Dashboards reference
This document is a complete reference for Sourcegraph's available dashboards, with details on how to interpret each panel and metric.
To learn more about Sourcegraph's metrics and how to view these dashboards, see our metrics guide.
Frontend
Serves all end-user browser and API requests.
To see this dashboard, visit /-/debug/grafana/d/frontend/frontend
on your Sourcegraph instance.
Frontend: Search at a glance
frontend: 99th_percentile_search_request_duration
99th percentile successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
```
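This query estimates the 99th percentile from cumulative Prometheus histogram buckets. The interpolation `histogram_quantile` performs can be sketched in Python (an illustrative approximation of Prometheus's linear interpolation within a bucket, not its exact implementation):

```python
import math

def histogram_quantile(q, buckets):
    """Estimate quantile q from cumulative histogram buckets.

    buckets: sorted list of (upper_bound, cumulative_count), ending with
    (math.inf, total). Linearly interpolates within the bucket that
    contains the target rank, as Prometheus does.
    """
    total = buckets[-1][1]
    rank = q * total
    lower_bound, prev_count = 0.0, 0.0
    for upper_bound, count in buckets:
        if count >= rank:
            if math.isinf(upper_bound):
                # Prometheus caps the estimate at the highest finite bound.
                return buckets[-2][0]
            width = count - prev_count
            if width == 0:
                return upper_bound
            return lower_bound + (upper_bound - lower_bound) * (rank - prev_count) / width
        lower_bound, prev_count = upper_bound, count

# Example: 100 requests; 50 finished under 0.1s, 90 under 0.5s, all under 1s.
p99 = histogram_quantile(0.99, [(0.1, 50), (0.5, 90), (1.0, 100), (math.inf, 100)])
```

Because the estimate interpolates inside a bucket, its accuracy depends on the configured bucket boundaries, not the raw observations.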
frontend: 90th_percentile_search_request_duration
90th percentile successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.90, sum by (le)(rate(src_search_streaming_latency_seconds_bucket{source="browser"}[5m])))
```
frontend: hard_timeout_search_responses
Hard timeout search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
```shell
(sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name!="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
```
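The hard-timeout percentage combines outright timeouts with alert responses of type `timed_out`, then divides by all search responses in the same window. The arithmetic reduces to (illustrative sketch; the values are hypothetical):

```python
def hard_timeout_pct(timeout, alert_timed_out, total_responses):
    """Percentage of search responses that hit a hard timeout in a window.

    timeout:          responses with status="timeout"
    alert_timed_out:  responses with status="alert", alert_type="timed_out"
    total_responses:  all search responses in the same window
    """
    return (timeout + alert_timed_out) / total_responses * 100

# e.g. 3 timeouts plus 2 timed-out alerts out of 500 responses
pct = hard_timeout_pct(3, 2, 500)
```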
frontend: hard_error_search_responses
Hard error search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
```
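The `ignoring(status) group_left` clause lets each per-status series on the left divide by the single overall total on the right, ignoring the `status` label when matching. The effect can be sketched as (illustrative Python with hypothetical counts):

```python
def per_status_share(by_status, total):
    """Each status's count as a percentage of the overall total.

    Mimics `sum by (status)(...) / ignoring(status) group_left sum(...)`:
    every labeled series on the left matches the one unlabeled total on
    the right, preserving the status label on the result.
    """
    return {status: count / total * 100 for status, count in by_status.items()}

shares = per_status_share({"error": 5, "partial_timeout": 15}, 500)
```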
frontend: partial_timeout_search_responses
Partial timeout search responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
```
frontend: search_alert_user_suggestions
Search alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="browser",request_name!="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name!="CodeIntelSearch"}[5m])) * 100
```
frontend: page_load_latency
90th percentile page load latency over all routes over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100020
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.9, sum by(le) (rate(src_http_request_duration_seconds_bucket{route!="raw",route!="blob",route!~"graphql.*"}[10m])))
```
Frontend: Search-based code intelligence at a glance
frontend: 99th_percentile_search_codeintel_request_duration
99th percentile code-intel successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
```
frontend: 90th_percentile_search_codeintel_request_duration
90th percentile code-intel successful search request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="browser",request_name="CodeIntelSearch"}[5m])))
```
frontend: hard_timeout_search_codeintel_responses
Hard timeout search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
```shell
(sum(increase(src_graphql_search_response{status="timeout",source="browser",request_name="CodeIntelSearch"}[5m])) + sum(increase(src_graphql_search_response{status="alert",alert_type="timed_out",source="browser",request_name="CodeIntelSearch"}[5m]))) / sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
frontend: hard_error_search_codeintel_responses
Hard error search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (status)(increase(src_graphql_search_response{status=~"error",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
frontend: partial_timeout_search_codeintel_responses
Partial timeout search code-intel responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (status)(increase(src_graphql_search_response{status="partial_timeout",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
frontend: search_codeintel_alert_user_suggestions
Search code-intel alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out",source="browser",request_name="CodeIntelSearch"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{source="browser",request_name="CodeIntelSearch"}[5m])) * 100
```
Frontend: Search GraphQL API usage at a glance
frontend: 99th_percentile_search_api_request_duration
99th percentile successful search API request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
```
frontend: 90th_percentile_search_api_request_duration
90th percentile successful search API request duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.90, sum by (le)(rate(src_graphql_field_seconds_bucket{type="Search",field="results",error="false",source="other"}[5m])))
```
frontend: hard_error_search_api_responses
Hard error search API responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (status)(increase(src_graphql_search_response{status=~"error",source="other"}[5m])) / ignoring(status) group_left sum(increase(src_graphql_search_response{source="other"}[5m]))
```
frontend: partial_timeout_search_api_responses
Partial timeout search API responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_graphql_search_response{status="partial_timeout",source="other"}[5m])) / sum(increase(src_graphql_search_response{source="other"}[5m]))
```
frontend: search_api_alert_user_suggestions
Search API alert user suggestions shown every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (alert_type)(increase(src_graphql_search_response{status="alert",alert_type!~"timed_out|no_results__suggest_quotes",source="other"}[5m])) / ignoring(alert_type) group_left sum(increase(src_graphql_search_response{status="alert",source="other"}[5m]))
```
Frontend: Site configuration client update latency
frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "frontend" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
```shell
src_conf_client_time_since_last_successful_update_seconds{job=~`(sourcegraph-)?frontend`,instance=~`${internalInstance:regex}`}
```
frontend: frontend_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "frontend" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
```shell
max(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`(sourcegraph-)?frontend`,instance=~`${internalInstance:regex}`}[1m]))
```
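Here `max_over_time(...[1m])` takes the maximum of each instance's samples over the last minute, and the outer `max(...)` collapses those per-instance maxima to a single worst-case staleness across the fleet. A sketch of the two-level reduction (hypothetical sample data):

```python
def fleet_max_staleness(samples_by_instance):
    """Worst-case seconds-since-last-successful-update across a fleet.

    samples_by_instance: dict of instance name -> samples in the window.
    Mirrors max(max_over_time(metric[1m])): an inner max per series,
    then an outer max across series.
    """
    return max(max(samples) for samples in samples_by_instance.values())

worst = fleet_max_staleness({"frontend-0": [12.0, 14.5], "frontend-1": [90.2, 3.1]})
```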
Frontend: Codeintel: Precise code intelligence usage at a glance
frontend: codeintel_resolvers_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_resolvers_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_resolvers_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_resolvers_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
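Throughout the remaining sections, error rate is computed as errors divided by (operations + errors). The denominator adds the error count to the `_total` series, which implies the `_total` series counts successful operations only; that reading is an inference from the formula, not something this document states. In short:

```python
def error_rate_pct(successes, errors):
    """Error rate as a percentage, matching errors / (total + errors) * 100
    where the *_total series is taken to count successes only (assumption)."""
    if successes + errors == 0:
        return 0.0
    return errors / (successes + errors) * 100

rate = error_rate_pct(95, 5)  # 5 errors against 95 successful operations
```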
frontend: codeintel_resolvers_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_resolvers_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_resolvers_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
frontend: codeintel_resolvers_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_resolvers_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_resolvers_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_resolvers_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Codeintel: Auto-index enqueuer
frontend: codeintel_autoindex_enqueuer_total
Aggregate enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
Aggregate successful enqueuer operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_autoindex_enqueuer_errors_total
Aggregate enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_autoindex_enqueuer_error_rate
Aggregate enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
frontend: codeintel_autoindex_enqueuer_total
Enqueuer operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_autoindex_enqueuer_99th_percentile_duration
99th percentile successful enqueuer operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindex_enqueuer_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
frontend: codeintel_autoindex_enqueuer_errors_total
Enqueuer operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_autoindex_enqueuer_error_rate
Enqueuer operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindex_enqueuer_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_autoindex_enqueuer_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Codeintel: dbstore stats
frontend: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
frontend: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
frontend: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Workerutil: lsif_indexes dbworker/store stats
frontend: workerutil_dbworker_store_codeintel_index_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: workerutil_dbworker_store_codeintel_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_workerutil_dbworker_store_codeintel_index_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: workerutil_dbworker_store_codeintel_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100702
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: workerutil_dbworker_store_codeintel_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100703
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_index_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_index_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Codeintel: lsifstore stats
frontend: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100803
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
frontend: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
frontend: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100813
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Codeintel: gitserver client
frontend: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100903
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
frontend: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
```shell
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
```
frontend: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=100913
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
```
Frontend: Codeintel: uploadstore stats
frontend: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
```shell
sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101002
on your Sourcegraph instance.
Technical details
Query:
```shell
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
```
frontend: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101003
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
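The 99th-percentile panels use `histogram_quantile()` over cumulative `_bucket` series: the function finds the bucket containing the q-th observation and linearly interpolates within it. A sketch of that estimation for one series, with made-up bucket counts:

```python
def histogram_quantile(q, buckets):
    """Sketch of PromQL histogram_quantile() for a single series.

    buckets: sorted list of (upper_bound, cumulative_count), ending with
    the +Inf bucket. Linearly interpolates inside the bucket holding the
    q-th observation, as Prometheus does for classic histograms.
    """
    total = buckets[-1][1]
    rank = q * total
    lower_bound, lower_count = 0.0, 0.0
    for upper_bound, count in buckets:
        if count >= rank:
            if upper_bound == float("inf"):
                return lower_bound  # fall back to the last finite bound
            width = upper_bound - lower_bound
            fraction = (rank - lower_count) / (count - lower_count)
            return lower_bound + width * fraction
        lower_bound, lower_count = upper_bound, count
    return float("nan")

# 100 observations: 90 under 0.5s, 99 under 1s, 1 above 1s
buckets = [(0.5, 90), (1.0, 99), (float("inf"), 100)]
print(histogram_quantile(0.99, buckets))  # the 99th observation sits at 1.0s
```

Note that the estimate is only as precise as the bucket boundaries: within a bucket, observations are assumed to be uniformly distributed.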
frontend: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101012
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101013
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service stats
frontend: codeintel_dependencies_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101102
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101103
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101112
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101113
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service store stats
frontend: codeintel_dependencies_background_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101202
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101203
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_background_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_background_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101213
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: dependencies service background stats
frontend: codeintel_dependencies_background_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101302
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101303
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_dependencies_background_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101310
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101311
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_dependencies_background_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_dependencies_background_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101312
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_dependencies_background_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101313
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_dependencies_background_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_dependencies_background_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Codeintel: lockfiles service stats
frontend: codeintel_lockfiles_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101402
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101403
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: codeintel_lockfiles_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_lockfiles_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: codeintel_lockfiles_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101412
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: codeintel_lockfiles_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101413
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_codeintel_lockfiles_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_codeintel_lockfiles_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Gitserver: Gitserver Client
frontend: gitserver_client_total
Aggregate GraphQL operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
Aggregate successful GraphQL operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_errors_total
Aggregate GraphQL operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101502
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
Aggregate GraphQL operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101503
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: gitserver_client_total
GraphQL operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101510
on your Sourcegraph instance.
Technical details
Query:
sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_99th_percentile_duration
99th percentile successful GraphQL operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101511
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: gitserver_client_errors_total
GraphQL operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101512
on your Sourcegraph instance.
Technical details
Query:
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: gitserver_client_error_rate
GraphQL operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101513
on your Sourcegraph instance.
Technical details
Query:
sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
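The gitserver client panels aggregate with `sum by (op,scope)`, which drops every label except `op` and `scope` and sums the values of series that collapse onto the same kept label set. A sketch of that grouping semantics (the label values below are hypothetical, not taken from a real instance):

```python
from collections import defaultdict

def sum_by(keys, series):
    """Sketch of PromQL's `sum by (...)`: series is a list of
    (labels_dict, value) pairs; labels outside `keys` are dropped and
    values sharing the remaining label set are summed."""
    out = defaultdict(float)
    for labels, value in series:
        group = tuple((k, labels.get(k, "")) for k in keys)
        out[group] += value
    return dict(out)

# two pods reporting the same (op, scope) collapse into one output series
series = [
    ({"op": "Commits", "scope": "batch", "pod": "frontend-0"}, 3.0),
    ({"op": "Commits", "scope": "batch", "pod": "frontend-1"}, 2.0),
    ({"op": "ResolveRevision", "scope": "user", "pod": "frontend-0"}, 1.0),
]
print(sum_by(("op", "scope"), series))
```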
Frontend: Batches: dbstore stats
frontend: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101600
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101601
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101602
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101603
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101610
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101611
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101612
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101613
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: service stats
frontend: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101700
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101701
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101702
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101703
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101710
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101711
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101712
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101713
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: Workspace execution dbstore
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101800
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101801
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101802
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101803
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Batches: HTTP API File Handler
frontend: batches_httpapi_total
Aggregate HTTP handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101900
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_99th_percentile_duration
Aggregate successful HTTP handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101901
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_errors_total
Aggregate HTTP handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101902
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_error_rate
Aggregate HTTP handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101903
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
frontend: batches_httpapi_total
HTTP handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101910
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_99th_percentile_duration
99th percentile successful HTTP handler operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101911
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_batches_httpapi_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: batches_httpapi_errors_total
HTTP handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101912
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: batches_httpapi_error_rate
HTTP handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=101913
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op)(increase(src_batches_httpapi_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op)(increase(src_batches_httpapi_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: up migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="up",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Out-of-band migrations: down migration invocation (one batch processed)
frontend: oobmigration_total
Migration handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_99th_percentile_duration
Aggregate successful migration handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_oobmigration_duration_seconds_bucket{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_errors_total
Migration handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: oobmigration_error_rate
Migration handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_oobmigration_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_oobmigration_errors_total{op="down",job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Frontend: Zoekt Configuration GRPC server metrics
frontend: zoekt_configuration_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))
frontend: zoekt_configuration_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)
frontend: zoekt_configuration_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102210
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m]))) ))
frontend: zoekt_configuration_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102211
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)) ))
frontend: zoekt_configuration_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102220
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
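The percentile panels here rely on Prometheus's `histogram_quantile`, which estimates a quantile from cumulative `le` buckets by linear interpolation within the bucket that contains the target rank. A simplified Python sketch of that idea (it omits the `+Inf` bucket and other edge cases the real implementation handles):

```python
# Rough sketch of how histogram_quantile estimates a percentile from
# cumulative ("le") buckets: find the first bucket whose cumulative
# count covers the target rank, then interpolate linearly within it.

def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """buckets: (upper_bound, cumulative_count) pairs sorted by bound."""
    total = buckets[-1][1]
    rank = q * total
    lower_bound, lower_count = 0.0, 0.0
    for upper_bound, cumulative in buckets:
        if cumulative >= rank:
            # Linear interpolation inside the bucket holding the rank.
            frac = (rank - lower_count) / (cumulative - lower_count)
            return lower_bound + (upper_bound - lower_bound) * frac
        lower_bound, lower_count = upper_bound, cumulative
    return buckets[-1][0]

# 100 requests: 50 finished under 0.1s, 90 under 0.5s, all under 1s.
print(histogram_quantile(0.99, [(0.1, 50), (0.5, 90), (1.0, 100)]))  # ~0.95
```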
frontend: zoekt_configuration_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102221
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102222
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102230
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102231
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102232
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102240
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102241
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102242
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))
frontend: zoekt_configuration_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102250
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method)))
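As the query shows, the average is the ratio of two rates: response messages sent per second divided by streaming RPCs started per second. A trivial sketch of that arithmetic, with a hypothetical guard for the no-traffic case:

```python
# Illustrative arithmetic behind this panel: messages-per-second divided
# by streams-started-per-second gives the average message count per
# streaming RPC. The zero guard is an illustration, not Sourcegraph code.

def avg_messages_per_stream(msg_sent_rate: float, streams_started_rate: float) -> float:
    if streams_started_rate == 0:
        return 0.0
    return msg_sent_rate / streams_started_rate

print(avg_messages_per_stream(120.0, 4.0))  # 30 messages per stream on average
```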
frontend: zoekt_configuration_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102260
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_configuration_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])) by (grpc_method, grpc_code)
Frontend: Zoekt Configuration GRPC "internal error" metrics
frontend: zoekt_configuration_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102300
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK"}[2m])))) / ((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
frontend: zoekt_configuration_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102301
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
frontend: zoekt_configuration_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_configuration" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102302
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))
frontend: zoekt_configuration_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_configuration" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with "grpc:", etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102310
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
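The note above describes a coarse string-prefix heuristic for classifying internal errors. A hypothetical sketch of that idea (the prefix list here is illustrative; Sourcegraph's actual heuristic may check additional prefixes):

```python
# Hypothetical sketch of the coarse internal-error heuristic described
# above: treat an error as "internal" if its message looks like it was
# produced by the grpc-go library rather than application code.

GRPC_INTERNAL_PREFIXES = ("grpc:",)  # illustrative, not the exact list

def looks_like_internal_error(message: str) -> bool:
    """True if the error message appears to originate from grpc-go itself."""
    return message.strip().startswith(GRPC_INTERNAL_PREFIXES)

print(looks_like_internal_error("grpc: the client connection is closing"))  # True
print(looks_like_internal_error("repo not found"))  # False
```

As the docs note, this kind of prefix check can miss gRPC-specific failures whose messages do not carry the expected prefix.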
frontend: zoekt_configuration_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_configuration" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with "grpc:", etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102311
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
frontend: zoekt_configuration_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_configuration" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_configuration" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with "grpc:", etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102312
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(grpc_method_status{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_internal_error="true",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method, grpc_code))
Frontend: Zoekt Configuration GRPC retry metrics
frontend: zoekt_configuration_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "zoekt_configuration" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102400
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService"}[2m])))))))
frontend: zoekt_configuration_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "zoekt_configuration" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102401
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",is_retried="true",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}"}[2m])) by (grpc_method))))))
frontend: zoekt_configuration_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "zoekt_configuration" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102402
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"sourcegraph.zoekt.configuration.v1.ZoektConfigurationService",grpc_method=~"${zoekt_configuration_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Frontend: Internal Api GRPC server metrics
frontend: internal_api_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))
frontend: internal_api_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)
frontend: internal_api_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102510
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m]))) ))
frontend: internal_api_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102511
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,grpc_code!="OK",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)) ))
frontend: internal_api_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102520
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102521
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102522
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102530
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102531
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102532
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102540
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102541
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102542
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))
frontend: internal_api_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102550
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method)))
frontend: internal_api_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102560
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${internal_api_method:regex}`,instance=~`${internalInstance:regex}`,grpc_service=~"api.internalapi.v1.ConfigService"}[2m])) by (grpc_method, grpc_code)
Frontend: Internal Api GRPC "internal error" metrics
frontend: internal_api_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102600
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
frontend: internal_api_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102601
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
frontend: internal_api_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "internal_api" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102602
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))
frontend: internal_api_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "internal_api" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
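The coarse heuristic described above can be sketched in a few lines. This is an illustrative Python approximation, not Sourcegraph's actual classifier, and the prefix list is an assumption:

```python
# Illustrative approximation of the coarse internal-error heuristic: treat
# an error as gRPC-internal if its message starts with a prefix the grpc-go
# library itself emits. The prefix list here is made up for the sketch.

GRPC_INTERNAL_PREFIXES = ("grpc:", "transport:", "rpc error: code = Internal")

def is_internal_grpc_error(message: str) -> bool:
    # str.startswith accepts a tuple of candidate prefixes.
    return message.startswith(GRPC_INTERNAL_PREFIXES)

print(is_internal_grpc_error("grpc: the client connection is closing"))  # True
print(is_internal_grpc_error("repo not found: github.com/foo/bar"))      # False
```

A prefix match like this is exactly why the docs warn that some gRPC-specific failures can slip through uncategorized: any internal error whose message lacks a known prefix is counted as an application error.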
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102610
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
frontend: internal_api_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "internal_api" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102611
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
frontend: internal_api_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "internal_api" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "internal_api" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102612
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"api.internalapi.v1.ConfigService",is_internal_error="true",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method, grpc_code))
Frontend: Internal Api GRPC retry metrics
frontend: internal_api_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "internal_api" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102700
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService"}[2m])))))))
frontend: internal_api_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried, aggregated across all "internal_api" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102701
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",is_retried="true",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}"}[2m])) by (grpc_method))))))
frontend: internal_api_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried, aggregated across all "internal_api" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102702
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"api.internalapi.v1.ConfigService",grpc_method=~"${internal_api_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Frontend: Internal service requests
frontend: internal_indexed_search_error_responses
Internal indexed search error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(code) (increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
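This query divides a per-code error vector by the overall total, which is what ignoring(code) group_left expresses in PromQL: many-to-one matching that ignores the code label on the left-hand side. A sketch of the same arithmetic in Python, with made-up increase() values:

```python
# Each non-2xx code's increase() divided by the increase() over all codes,
# expressed as a percentage. The counts below are made up.

error_increases = {"500": 64.0, "503": 16.0}  # per-code non-2xx increases
total_increase = 512.0                        # increase over all codes

error_pct_by_code = {
    code: count / total_increase * 100 for code, count in error_increases.items()
}
print(error_pct_by_code)  # {'500': 12.5, '503': 3.125}
```

The group_left modifier is what lets the single total series match every per-code series on the left side of the division.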
frontend: internal_unindexed_search_error_responses
Internal unindexed search error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(code) (increase(searcher_service_request_total{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total[5m])) * 100
frontend: 99th_percentile_gitserver_duration
99th percentile successful gitserver query duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102810
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,category)(rate(src_gitserver_request_duration_seconds_bucket{job=~"(sourcegraph-)?frontend"}[5m])))
frontend: gitserver_error_responses
Gitserver error responses every 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102811
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend",code!~"2.."}[5m])) / ignoring(code) group_left sum by (category)(increase(src_gitserver_request_duration_seconds_count{job=~"(sourcegraph-)?frontend"}[5m])) * 100
frontend: observability_test_alert_warning
Warning test alert metric
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102820
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(owner) (observability_test_metric_warning)
frontend: observability_test_alert_critical
Critical test alert metric
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102821
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(owner) (observability_test_metric_critical)
Frontend: Authentication API requests
frontend: sign_in_rate
Rate of API requests to sign-in
Rate (QPS) of requests to sign-in
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))
frontend: sign_in_latency_p99
99th percentile of sign-in latency
99th percentile of sign-in latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102901
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-in",method="post"}[5m])) by (le))
frontend: sign_in_error_rate
Percentage of sign-in requests by http code
Percentage of sign-in requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102902
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code)(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-in",method="post"}[5m]))*100
frontend: sign_up_rate
Rate of API requests to sign-up
Rate (QPS) of requests to sign-up
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102910
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))
frontend: sign_up_latency_p99
99th percentile of sign-up latency
99th percentile of sign-up latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102911
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-up",method="post"}[5m])) by (le))
frontend: sign_up_code_percentage
Percentage of sign-up requests by http code
Percentage of sign-up requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102912
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code)(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-up",method="post"}[5m]))*100
frontend: sign_out_rate
Rate of API requests to sign-out
Rate (QPS) of requests to sign-out
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102920
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))
frontend: sign_out_latency_p99
99th percentile of sign-out latency
99th percentile of sign-out latency
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102921
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum(rate(src_http_request_duration_seconds_bucket{route="sign-out"}[5m])) by (le))
frontend: sign_out_error_rate
Percentage of sign-out requests that return non-303 http code
Percentage of sign-out requests grouped by http code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102922
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code)(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))/ ignoring (code) group_left sum(irate(src_http_request_duration_seconds_count{route="sign-out"}[5m]))*100
frontend: account_failed_sign_in_attempts
Rate of failed sign-in attempts
Failed sign-in attempts per minute
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102930
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_frontend_account_failed_sign_in_attempts_total[1m]))
frontend: account_lockouts
Rate of account lockouts
Account lockouts per minute
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=102931
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_frontend_account_lockouts_total[1m]))
Frontend: External HTTP Request Rate
frontend: external_http_request_rate_by_host
Rate of external HTTP requests by host over 1m
Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (host) (rate(src_http_client_external_request_count{host=~`${httpRequestHost:regex}`}[1m]))
frontend: external_http_request_rate_by_host_by_code
Rate of external HTTP requests by host and response code over 1m
Shows the rate of external HTTP requests made by Sourcegraph to other services, broken down by host and response code.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (host, status_code) (rate(src_http_client_external_request_count{host=~`${httpRequestHost:regex}`}[1m]))
Frontend: Cody API requests
frontend: cody_api_rate
Rate of API requests to cody endpoints (excluding GraphQL)
Rate (QPS) of requests to cody-related endpoints. completions.stream is for the conversational endpoints. completions.code is for the code auto-complete endpoints.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (route, code)(irate(src_http_request_duration_seconds_count{route=~"^completions.*"}[5m]))
Frontend: Cloud KMS and cache
frontend: cloudkms_cryptographic_requests
Cryptographic requests to Cloud KMS every 1m
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_cloudkms_cryptographic_total[1m]))
frontend: encryption_cache_hit_ratio
Average encryption cache hit ratio per workload
Encryption cache hit ratio (hits / (hits + misses)), minimum across all instances of a workload.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103201
on your Sourcegraph instance.
Technical details
Query:
SHELLmin by (kubernetes_name) (src_encryption_cache_hit_total/(src_encryption_cache_hit_total+src_encryption_cache_miss_total))
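The hit-ratio arithmetic above computes hits / (hits + misses) per instance and then takes the minimum across instances of the workload, so the panel surfaces the worst-performing replica. A Python sketch with made-up instance names and counter values:

```python
# Per-instance hit ratio hits / (hits + misses), then the minimum across
# instances of the workload. All names and counts are made up.

counters = {
    "frontend-0": {"hits": 900, "misses": 100},
    "frontend-1": {"hits": 750, "misses": 250},
}

ratios = {
    name: c["hits"] / (c["hits"] + c["misses"]) for name, c in counters.items()
}
worst = min(ratios.values())
print(worst)  # 0.75
```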
frontend: encryption_cache_evictions
Rate of encryption cache evictions - sum across all instances of a given workload
Rate of encryption cache evictions (caused by the cache exceeding its maximum size), summed across all instances of a workload.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (kubernetes_name) (irate(src_encryption_cache_eviction_total[5m]))
Frontend: Database connections
frontend: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="frontend"})
frontend: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="frontend"})
frontend: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="frontend"})
frontend: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103311
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="frontend"})
frontend: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103320
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="frontend"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="frontend"}[5m]))
frontend: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103330
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="frontend"}[5m]))
frontend: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103331
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="frontend"}[5m]))
frontend: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103332
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="frontend"}[5m]))
Frontend: Container monitoring (not available on server)
frontend: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod (frontend|sourcegraph-frontend) (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p (frontend|sourcegraph-frontend).
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' (frontend|sourcegraph-frontend) (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the (frontend|sourcegraph-frontend) container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs (frontend|sourcegraph-frontend) (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103400
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend).*"}) > 60)
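The staleness check the query expresses is simple: a container counts as missing when its container_last_seen sample is more than 60 seconds old. A Python sketch with made-up timestamps:

```python
import time

# A container counts as "missing" when its last_seen timestamp is more
# than 60 seconds old. Names and timestamps are made up.

now = time.time()
last_seen = {
    "frontend-abc": now - 10,   # seen 10s ago: healthy
    "frontend-def": now - 300,  # not seen for 5 minutes: missing
}

missing = [name for name, ts in last_seen.items() if now - ts > 60]
print(missing)  # ['frontend-def']
```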
frontend: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103401
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103402
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}
frontend: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations performed by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with frontend issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]) + rate(container_fs_writes_total{name=~"^(frontend|sourcegraph-frontend).*"}[1h]))
Frontend: Provisioning indicators (not available on server)
frontend: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103500
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103501
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[1d])
frontend: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103510
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
frontend: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103511
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend).*"}[5m])
frontend: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103512
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend).*"})
Frontend: Golang runtime monitoring
frontend: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103600
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*(frontend|sourcegraph-frontend)"})
frontend: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103601
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*(frontend|sourcegraph-frontend)"})
Frontend: Kubernetes monitoring (only available on Kubernetes)
frontend: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*(frontend|sourcegraph-frontend)"}) / count by (app) (up{app=~".*(frontend|sourcegraph-frontend)"}) * 100
Frontend: Search: Ranking
frontend: total_search_clicks
Total number of search clicks over 6h
The total number of search clicks across all search types over a 6 hour window.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (ranked) (increase(src_search_ranking_result_clicked_count[6h]))
frontend: percent_clicks_on_top_search_result
Percent of clicks on top search result over 6h
The percent of clicks that were on the top search result, excluding searches with very few results (3 or fewer).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="1",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100
frontend: percent_clicks_on_top_3_search_results
Percent of clicks on top 3 search results over 6h
The percent of clicks that were on the first 3 search results, excluding searches with very few results (3 or fewer).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103802
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (ranked) (increase(src_search_ranking_result_clicked_bucket{le="3",resultsLength=">3"}[6h])) / sum by (ranked) (increase(src_search_ranking_result_clicked_count[6h])) * 100
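The clicked-position histogram used by the two panels above is cumulative: the le="3" bucket already includes every click counted in le="1". A Python sketch of the percentage arithmetic, with made-up counts:

```python
# The position histogram is cumulative: the le="1" bucket counts clicks on
# the top result, and le="3" already includes those. Counts are made up.

clicks_at_or_below = {"1": 600.0, "3": 840.0}  # cumulative clicks by position
total_clicks = 960.0

pct_top_1 = clicks_at_or_below["1"] / total_clicks * 100
pct_top_3 = clicks_at_or_below["3"] / total_clicks * 100
print(pct_top_1, pct_top_3)  # 62.5 87.5
```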
frontend: distribution_of_clicked_search_result_type_over_6h_in_percent
Distribution of clicked search result type over 6h
The distribution of clicked search results by result type. At every point in time, the values should sum to 100.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103810
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_search_ranking_result_clicked_count{type="repo"}[6h])) / sum(increase(src_search_ranking_result_clicked_count[6h])) * 100
frontend: percent_zoekt_searches_hitting_flush_limit
Percent of zoekt searches that hit the flush time limit
The percent of Zoekt searches that hit the flush time limit. These searches don't visit all matches, so they could be missing relevant results, or be non-deterministic.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103811
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(zoekt_final_aggregate_size_count{reason="timer_expired"}[1d])) / sum(increase(zoekt_final_aggregate_size_count[1d])) * 100
Frontend: Email delivery
frontend: email_delivery_failures
Email delivery failure rate over 30 minutes
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_email_send{success="false"}[30m])) / sum(increase(src_email_send[30m])) * 100
frontend: email_deliveries_total
Total emails successfully delivered every 30 minutes
Total emails successfully delivered.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103910
on your Sourcegraph instance.
Technical details
Query:
SHELLsum (increase(src_email_send{success="true"}[30m]))
frontend: email_deliveries_by_source
Emails successfully delivered every 30 minutes by source
Emails successfully delivered by source, i.e. product feature.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=103911
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (email_source) (increase(src_email_send{success="true"}[30m]))
Frontend: Sentinel queries (only on sourcegraph.com)
frontend: mean_successful_sentinel_duration_over_2h
Mean successful sentinel search duration over 2h
Mean search duration for all successful sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_search_response_latency_seconds_sum{source=~`searchblitz.*`, status=`success`}[2h])) / sum(rate(src_search_response_latency_seconds_count{source=~`searchblitz.*`, status=`success`}[2h]))
frontend: mean_sentinel_stream_latency_over_2h
Mean successful sentinel stream latency over 2h
Mean time to first result for all successful streaming sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[2h])) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[2h]))
frontend: 90th_percentile_successful_sentinel_duration_over_2h
90th percentile successful sentinel search duration over 2h
90th percentile search duration for all successful sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104010
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
frontend: 90th_percentile_sentinel_stream_latency_over_2h
90th percentile successful sentinel stream latency over 2h
90th percentile time to first result for all successful streaming sentinel queries
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le)(label_replace(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[2h]), "source", "$1", "source", "searchblitz_(.*)")))
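The `label_replace` call in the two percentile queries above strips the `searchblitz_` prefix so each sentinel query is reported under a short name. A rough equivalent of that label rewrite as a plain text transform, using a made-up label value:

```shell
# label_replace(v, "source", "$1", "source", "searchblitz_(.*)") captures
# everything after the "searchblitz_" prefix and writes it back into the
# `source` label. The same rewrite on a hypothetical value, using sed:
src="searchblitz_regexp_small"
short=$(printf '%s' "$src" | sed -E 's/^searchblitz_(.*)$/\1/')
echo "$short"
```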
frontend: mean_successful_sentinel_duration_by_query
Mean successful sentinel search duration by query
Mean search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104020
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_search_response_latency_seconds_sum{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_response_latency_seconds_count{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (source)
frontend: mean_sentinel_stream_latency_by_query
Mean successful sentinel stream latency by query
Mean time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104021
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_search_streaming_latency_seconds_sum{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source) / sum(rate(src_search_streaming_latency_seconds_count{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (source)
frontend: 90th_percentile_successful_sentinel_duration_by_query
90th percentile successful sentinel search duration by query
90th percentile search duration for successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104030
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_successful_stream_latency_by_query
90th percentile successful sentinel stream latency by query
90th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104031
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 90th_percentile_unsuccessful_duration_by_query
90th percentile unsuccessful sentinel search duration by query
90th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104040
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum(rate(src_search_response_latency_seconds_bucket{source=~`searchblitz.*`, status!=`success`}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_sentinel_duration_by_query
75th percentile successful sentinel search duration by query
75th percentile search duration of successful sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104050
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~"searchblitz.*", status="success"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_successful_stream_latency_by_query
75th percentile successful sentinel stream latency by query
75th percentile time to first result for successful streaming sentinel queries, broken down by query. Useful for debugging whether a slowdown is limited to a specific type of query.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104051
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum(rate(src_search_streaming_latency_seconds_bucket{source=~"searchblitz.*"}[$sentinel_sampling_duration])) by (le, source))
frontend: 75th_percentile_unsuccessful_duration_by_query
75th percentile unsuccessful sentinel search duration by query
75th percentile search duration of unsuccessful sentinel queries (by error or timeout), broken down by query. Useful for debugging how the performance of failed requests affects UX.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104060
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum(rate(src_search_response_latency_seconds_bucket{source=~`searchblitz.*`, status!=`success`}[$sentinel_sampling_duration])) by (le, source))
frontend: unsuccessful_status_rate
Unsuccessful status rate
The rate of unsuccessful sentinel queries, broken down by failure type.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104070
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_graphql_search_response{source=~"searchblitz.*", status!="success"}[$sentinel_sampling_duration])) by (status)
Frontend: Incoming webhooks
frontend: p95_time_to_handle_incoming_webhooks
P95 time to handle incoming webhooks
p95 response time to incoming webhook requests from code hosts.
Increases in response time can point to too much load on the database to keep up with the incoming requests.
See this documentation page for more details on webhook requests: https://sourcegraph.com/docs/admin/config/webhooks/incoming
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104100
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum (rate(src_http_request_duration_seconds_bucket{route=~"webhooks|github.webhooks|gitlab.webhooks|bitbucketServer.webhooks|bitbucketCloud.webhooks"}[5m])) by (le, route))
Frontend: Search aggregations: proactive and expanded search aggregations
frontend: insights_aggregations_total
Aggregate search aggregations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_99th_percentile_duration
Aggregate successful search aggregations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_errors_total
Aggregate search aggregations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_error_rate
Aggregate search aggregations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
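Every `*_error_rate` panel in this reference uses the same shape: errors divided by (successes + errors), times 100. A minimal sketch of that arithmetic with made-up counter increments (the real values come from the Prometheus counters in the query above):

```shell
# Hypothetical 5m increments:
errors=5
successes=95
# error rate (%) = errors / (successes + errors) * 100
error_rate=$(( errors * 100 / (successes + errors) ))
echo "${error_rate}%"
```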
frontend: insights_aggregations_total
Search aggregations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_99th_percentile_duration
99th percentile successful search aggregations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op,extended_mode)(rate(src_insights_aggregations_duration_seconds_bucket{job=~"^(frontend|sourcegraph-frontend).*"}[5m])))
frontend: insights_aggregations_errors_total
Search aggregations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))
frontend: insights_aggregations_error_rate
Search aggregations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/frontend/frontend?viewPanel=104213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) / (sum by (op,extended_mode)(increase(src_insights_aggregations_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m])) + sum by (op,extended_mode)(increase(src_insights_aggregations_errors_total{job=~"^(frontend|sourcegraph-frontend).*"}[5m]))) * 100
Git Server
Stores, manages, and operates Git repositories.
To see this dashboard, visit /-/debug/grafana/d/gitserver/gitserver
on your Sourcegraph instance.
gitserver: go_routines
Go routines
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLgo_goroutines{app="gitserver", instance=~`${shard:regex}`}
gitserver: cpu_throttling_time
Container CPU throttling time %
A high value indicates that the container is spending too much time waiting for CPU cycles.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) ((rate(container_cpu_cfs_throttled_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m]) / rate(container_cpu_cfs_periods_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m])) * 100)
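The query above divides throttled CFS periods by total CFS periods to get a throttling percentage. With made-up counter increments over a 5m window, the arithmetic looks like:

```shell
# Hypothetical CFS accounting values for one container over 5m:
throttled_periods=120
total_periods=3000
# percent of scheduling periods in which the container was throttled
throttle_pct=$(( throttled_periods * 100 / total_periods ))
echo "${throttle_pct}%"
```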
gitserver: cpu_usage_seconds
Cpu usage seconds
- This value should not exceed 75% of the CPU limit over a longer period of time.
- We cannot alert on this as we don't know the resource allocation.
- If this value is high for a longer time, consider increasing the CPU limit for the container.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_cpu_usage_seconds_total{container_label_io_kubernetes_container_name="gitserver", container_label_io_kubernetes_pod_name=~`${shard:regex}`}[5m]))
gitserver: disk_space_remaining
Disk space remaining
Indicates disk space remaining for each gitserver instance, which is used to determine when to start evicting least-used repository clones from disk (default 10%, configured by SRC_REPOS_DESIRED_PERCENT_FREE).
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100020
on your Sourcegraph instance.
Technical details
Query:
SHELL(src_gitserver_disk_space_available{instance=~`${shard:regex}`} / src_gitserver_disk_space_total{instance=~`${shard:regex}`}) * 100
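The eviction threshold check described above can be sketched as follows; the byte counts are made up for illustration, and `SRC_REPOS_DESIRED_PERCENT_FREE` defaults to 10:

```shell
# Hypothetical disk numbers; the real ones come from the
# src_gitserver_disk_space_available / src_gitserver_disk_space_total metrics.
available_bytes=50000000000
total_bytes=1000000000000
desired_percent_free=${SRC_REPOS_DESIRED_PERCENT_FREE:-10}
percent_free=$(( available_bytes * 100 / total_bytes ))
if [ "$percent_free" -lt "$desired_percent_free" ]; then
  echo "below threshold: evict least-used repository clones"
else
  echo "ok"
fi
```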
gitserver: running_git_commands
Git commands running on each gitserver instance
A high value signals load.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100030
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance, cmd) (src_gitserver_exec_running{instance=~`${shard:regex}`})
gitserver: git_commands_received
Rate of git commands received
Per-second rate per command
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100031
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (cmd) (rate(src_gitserver_exec_duration_seconds_count{instance=~`${shard:regex}`}[5m]))
gitserver: echo_command_duration_test
Echo test command duration
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100040
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_gitserver_echo_duration_seconds)
gitserver: repo_corrupted
Number of times a repo corruption has been identified
A non-zero value here indicates that a problem has been detected with the gitserver repository storage. Repository corruption is never expected and is a real issue. Gitserver should try to recover by recloning the affected repositories, but this may take a while depending on repository size.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100041
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_gitserver_repo_corrupted[5m]))
gitserver: repository_clone_queue_size
Repository clone queue size
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100050
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_gitserver_clone_queue)
gitserver: src_gitserver_repo_count
Number of repositories on gitserver
This metric is only for informational purposes. It indicates the total number of repositories on gitserver.
It does not indicate any problems with the instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100051
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_gitserver_repo_count
Git Server: Gitservice for internal cloning
gitserver: aggregate_gitservice_request_duration
95th percentile gitservice request duration aggregate
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type=`gitserver`, error=`false`}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice request duration per shard
A high value means any internal service trying to clone a repo from gitserver is slowed down.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type=`gitserver`, error=`false`, instance=~`${shard:regex}`}[5m])) by (le, instance))
gitserver: aggregate_gitservice_error_request_duration
95th percentile gitservice error request duration aggregate
95th percentile gitservice error request duration aggregate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type=`gitserver`, error=`true`}[5m])) by (le))
gitserver: gitservice_request_duration
95th percentile gitservice error request duration per shard
95th percentile gitservice error request duration per shard
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_gitserver_gitservice_duration_seconds_bucket{type=`gitserver`, error=`true`, instance=~`${shard:regex}`}[5m])) by (le, instance))
gitserver: aggregate_gitservice_request_rate
Aggregate gitservice request rate
Aggregate gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_gitserver_gitservice_duration_seconds_count{type=`gitserver`, error=`false`}[5m]))
gitserver: gitservice_request_rate
Gitservice request rate per shard
Per shard gitservice request rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100121
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_gitserver_gitservice_duration_seconds_count{type=`gitserver`, error=`false`, instance=~`${shard:regex}`}[5m]))
gitserver: aggregate_gitservice_request_error_rate
Aggregate gitservice request error rate
Aggregate gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100130
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_gitserver_gitservice_duration_seconds_count{type=`gitserver`, error=`true`}[5m]))
gitserver: gitservice_request_error_rate
Gitservice request error rate per shard
Per shard gitservice request error rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100131
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(src_gitserver_gitservice_duration_seconds_count{type=`gitserver`, error=`true`, instance=~`${shard:regex}`}[5m]))
gitserver: aggregate_gitservice_requests_running
Aggregate gitservice requests running
Aggregate gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100140
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_gitserver_gitservice_running{type=`gitserver`})
gitserver: gitservice_requests_running
Gitservice requests running per shard
Per shard gitservice requests running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100141
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_gitserver_gitservice_running{type=`gitserver`, instance=~`${shard:regex}`}) by (instance)
Git Server: Gitserver cleanup jobs
gitserver: janitor_running
Janitor process is running
1 if the janitor process is currently running
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (instance) (src_gitserver_janitor_running{instance=~`${shard:regex}`})
gitserver: janitor_job_duration
95th percentile job run duration
95th percentile job run duration
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_gitserver_janitor_job_duration_seconds_bucket{instance=~`${shard:regex}`}[5m])) by (le, job_name))
gitserver: janitor_job_failures
Failures over 5m (by job)
The rate of failures over 5m (by job)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100220
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job_name) (rate(src_gitserver_janitor_job_duration_seconds_count{instance=~`${shard:regex}`,success="false"}[5m]))
gitserver: repos_removed
Repositories removed due to disk pressure
Repositories removed due to disk pressure
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100230
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (rate(src_gitserver_repos_removed_disk_pressure{instance=~`${shard:regex}`}[5m]))
gitserver: non_existent_repos_removed
Repositories removed because they are not defined in the DB
Repositories removed because they are not defined in the DB
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100240
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (increase(src_gitserver_non_existing_repos_removed[5m]))
gitserver: sg_maintenance_reason
Successful sg maintenance jobs over 1h (by reason)
The rate of successful sg maintenance jobs and the reason why they were triggered
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100250
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (reason) (rate(src_gitserver_maintenance_status{success="true"}[1h]))
gitserver: git_prune_skipped
Successful git prune jobs over 1h
The rate of successful git prune jobs over 1h and whether they were skipped
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100260
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (skipped) (rate(src_gitserver_prune_status{success="true"}[1h]))
Git Server: Search
gitserver: search_latency
Mean time until first result is sent
Mean latency (time to first result) of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_gitserver_search_latency_seconds_sum[5m]) / rate(src_gitserver_search_latency_seconds_count[5m])
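This panel divides a duration-sum counter by a request-count counter to obtain a mean. With made-up increments over a 5m window, the computation reduces to:

```shell
# Hypothetical 5m increments of the latency sum/count counter pair:
latency_sum="12.5"   # total seconds until first result, summed over requests
latency_count="50"   # number of search requests observed
# mean time to first result = sum / count
mean=$(awk -v s="$latency_sum" -v c="$latency_count" 'BEGIN { printf "%.2f", s / c }')
echo "${mean}s"
```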
gitserver: search_duration
Mean search duration
Mean duration of gitserver search requests
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_gitserver_search_duration_seconds_sum[5m]) / rate(src_gitserver_search_duration_seconds_count[5m])
gitserver: search_rate
Rate of searches run by pod
The rate of searches executed on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_gitserver_search_latency_seconds_count{instance=~`${shard:regex}`}[5m])
gitserver: running_searches
Number of searches currently running by pod
The number of searches currently executing on gitserver by pod
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (src_gitserver_search_running{instance=~`${shard:regex}`})
Git Server: Gitserver: Gitserver Backend
gitserver: concurrent_backend_operations
Number of concurrently running backend operations
The number of requests currently being handled by the gitserver backend layer, at the point in time of scraping.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_gitserver_backend_concurrent_operations
gitserver: gitserver_backend_total
Aggregate operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_backend_99th_percentile_duration
Aggregate successful operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_gitserver_backend_duration_seconds_bucket{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_backend_errors_total
Aggregate operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_backend_error_rate
Aggregate operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m])) / (sum(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m])) + sum(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))) * 100
gitserver: gitserver_backend_total
Operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100420
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_backend_99th_percentile_duration
99th percentile successful operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100421
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_gitserver_backend_duration_seconds_bucket{job=~"^gitserver.*"}[5m])))
gitserver: gitserver_backend_errors_total
Operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100422
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))
gitserver: gitserver_backend_error_rate
Operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100423
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_gitserver_backend_total{job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_gitserver_backend_errors_total{job=~"^gitserver.*"}[5m]))) * 100
Git Server: Gitserver: Gitserver Client
gitserver: gitserver_client_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))
gitserver: gitserver_client_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m]))
gitserver: gitserver_client_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))
gitserver: gitserver_client_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100
gitserver: gitserver_client_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m]))
gitserver: gitserver_client_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^*.*"}[5m])))
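The percentile queries in this reference all follow the pattern above: PromQL's `histogram_quantile` estimates a quantile from cumulative `le` buckets by linearly interpolating within the bucket that contains the requested rank. A simplified sketch of that estimation (`bucket` and `quantile` are illustrative names, not any real API):

```go
package main

import "fmt"

// bucket is one cumulative histogram bucket, as in a Prometheus *_bucket series.
type bucket struct {
	le    float64 // upper bound of the bucket
	count float64 // cumulative count of observations <= le
}

// quantile locates the bucket containing rank q*total and linearly
// interpolates within it, mirroring (in simplified form) how PromQL's
// histogram_quantile estimates percentiles.
func quantile(q float64, buckets []bucket) float64 {
	total := buckets[len(buckets)-1].count
	rank := q * total
	lowerBound, lowerCount := 0.0, 0.0
	for _, b := range buckets {
		if b.count >= rank {
			return lowerBound + (b.le-lowerBound)*((rank-lowerCount)/(b.count-lowerCount))
		}
		lowerBound, lowerCount = b.le, b.count
	}
	return buckets[len(buckets)-1].le
}

func main() {
	// 100 requests: 50 under 0.1s, 90 under 0.5s, all under 1s.
	b := []bucket{{0.1, 50}, {0.5, 90}, {1.0, 100}}
	fmt.Println(quantile(0.99, b)) // p99 falls in the 0.5s-1.0s bucket
}
```

Because the estimate interpolates inside a bucket, its accuracy depends entirely on how finely the histogram's bucket boundaries are spaced around the true quantile.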
gitserver: gitserver_client_errors_total
Gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))
gitserver: gitserver_client_error_rate
Gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^*.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^*.*"}[5m]))) * 100
Git Server: Repos disk I/O metrics
gitserver: repos_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100620
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
gitserver: repos_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100621
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
gitserver: repos_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100630
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
gitserver: repos_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100631
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
gitserver: repos_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100640
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_writes_merged_sec
Merged write request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100641
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
gitserver: repos_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), gitserver could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device gitserver is using, not the load gitserver is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100650
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (gitserver_mount_point_info{mount_name="reposDir",instance=~`${shard:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
Git Server: Gitserver GRPC server metrics
gitserver: gitserver_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))
gitserver: gitserver_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)
gitserver: gitserver_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m]))) ))
gitserver: gitserver_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${gitserver_method:regex}`,grpc_code!="OK",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)) ))
gitserver: gitserver_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100721
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100722
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100730
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100731
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100732
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100740
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100741
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100742
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])))
gitserver: gitserver_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100750
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method)))
gitserver: gitserver_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100760
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${gitserver_method:regex}`,instance=~`${shard:regex}`,grpc_service=~"gitserver.v1.GitserverService"}[2m])) by (grpc_method, grpc_code)
Git Server: Gitserver GRPC "internal error" metrics
gitserver: gitserver_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
gitserver: gitserver_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))))))
gitserver: gitserver_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "gitserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
gitserver: gitserver_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "gitserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
gitserver: gitserver_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "gitserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))))))
gitserver: gitserver_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "gitserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "gitserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug from Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"gitserver.v1.GitserverService",is_internal_error="true",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
Git Server: Gitserver GRPC retry metrics
gitserver: gitserver_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "gitserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService"}[2m])))))))
gitserver: gitserver_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried, aggregated across all "gitserver" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",is_retried="true",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}"}[2m])) by (grpc_method))))))
gitserver: gitserver_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried, aggregated across all "gitserver" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"gitserver.v1.GitserverService",grpc_method=~"${gitserver_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Git Server: Site configuration client update latency
gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "gitserver" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`.*gitserver`,instance=~`${shard:regex}`}
gitserver: gitserver_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "gitserver" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*gitserver`,instance=~`${shard:regex}`}[1m]))
Git Server: Codeintel: Coursier invocation stats
gitserver: codeintel_coursier_total
Aggregate invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
Aggregate successful invocation operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
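The error-rate queries in this dashboard all follow the same shape: the errors counter divided by the sum of the two counters, which suggests the `_total` series counts successful operations only. A minimal illustrative sketch of that arithmetic (the function name is hypothetical, not Sourcegraph code):

```python
def error_rate_percent(success_increase: float, error_increase: float) -> float:
    """Mirrors errors / (total + errors) * 100, where `total` counts
    successes only, so the denominator is all operations in the window."""
    denominator = success_increase + error_increase
    if denominator == 0:
        return 0.0  # no operations observed in the window
    return error_increase / denominator * 100
```

For example, 5 errors alongside 95 successful invocations over the last 5m yields a 5% error rate.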
gitserver: codeintel_coursier_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
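The `histogram_quantile` function used throughout these panels estimates a quantile from cumulative bucket counters (the `le` label is each bucket's upper bound). An illustrative sketch of the interpolation, assuming positive bucket bounds (this is a simplified model, not Prometheus's actual implementation):

```python
def histogram_quantile(q: float, buckets: list[tuple[float, float]]) -> float:
    """Estimate the q-quantile from (upper_bound, cumulative_count) pairs.
    Linearly interpolates within the bucket containing the target rank."""
    buckets = sorted(buckets)
    total = buckets[-1][1]  # the largest bucket holds the total count
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if count == prev_count:
                return bound
            # assume observations are uniformly spread inside the bucket
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]
```

With buckets `le=0.5 → 4` and `le=1.0 → 10` (ten observations total), the median rank of 5 falls in the second bucket and interpolates to roughly 0.58s. This is also why quantile estimates can only be as precise as the bucket boundaries allow.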
gitserver: codeintel_coursier_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_coursier_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: Codeintel: npm invocation stats
gitserver: codeintel_npm_total
Aggregate invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
Aggregate successful invocations operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_errors_total
Aggregate invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Aggregate invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
gitserver: codeintel_npm_total
Invocations operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_99th_percentile_duration
99th percentile successful invocations operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^gitserver.*"}[5m])))
gitserver: codeintel_npm_errors_total
Invocations operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))
gitserver: codeintel_npm_error_rate
Invocations operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^gitserver.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^gitserver.*"}[5m]))) * 100
Git Server: HTTP handlers
gitserver: healthy_request_rate
Requests per second, by route, when the status code is 2xx
The number of healthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code=~"2.."}[5m]))
gitserver: unhealthy_request_rate
Requests per second, by route, when the status code is not 2xx
The number of unhealthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (route) (rate(src_http_request_duration_seconds_count{app="gitserver",code!~"2.."}[5m]))
gitserver: request_rate_by_code
Requests per second, by status code
The number of HTTP requests per second by code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code) (rate(src_http_request_duration_seconds_count{app="gitserver"}[5m]))
gitserver: 95th_percentile_healthy_requests
95th percentile duration by route, when the status code is 2xx
The 95th percentile request duration by route when the status code is 2xx
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101310
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code=~"2.."}[5m])) by (le, route))
gitserver: 95th_percentile_unhealthy_requests
95th percentile duration by route, when the status code is not 2xx
The 95th percentile request duration by route when the status code is not 2xx
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="gitserver",code!~"2.."}[5m])) by (le, route))
Git Server: Database connections
gitserver: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="gitserver"})
gitserver: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="gitserver"})
gitserver: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="gitserver"})
gitserver: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="gitserver"})
gitserver: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101420
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="gitserver"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="gitserver"}[5m]))
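The query above divides two counter increases: total seconds spent blocked waiting for a connection, and the number of requests that had to wait. A hypothetical sketch of that ratio (function name is illustrative, not Sourcegraph code):

```python
def mean_blocked_seconds(blocked_seconds_increase: float,
                         requests_waited_increase: float) -> float:
    """Average seconds a caller spent blocked per connection request,
    mirroring increase(blocked_seconds) / increase(waited_for)."""
    if requests_waited_increase == 0:
        return 0.0  # no request waited for a connection in the window
    return blocked_seconds_increase / requests_waited_increase
```

For example, 2 total blocked seconds spread over 4 waiting requests means each waited 0.5s on average, which the related alerts would treat as pool contention.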
gitserver: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101430
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="gitserver"}[5m]))
gitserver: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101431
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="gitserver"}[5m]))
gitserver: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101432
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="gitserver"}[5m]))
Git Server: Container monitoring (not available on server)
gitserver: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod gitserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p gitserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' gitserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the gitserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs gitserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^gitserver.*"}) > 60)
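The query flags any container whose last-seen timestamp is more than 60 seconds old. A small illustrative sketch of the same check (names are hypothetical):

```python
def containers_missing(now: float, last_seen: dict[str, float],
                       threshold: float = 60.0) -> list[str]:
    """Names of containers not seen within `threshold` seconds, mirroring
    (time() - container_last_seen) > 60. Timestamps are Unix seconds."""
    return [name for name, ts in last_seen.items() if now - ts > threshold]
```

A container seen 10 seconds ago passes; one last seen 100 seconds ago is reported as missing.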
gitserver: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}
gitserver: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101502
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}
gitserver: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with gitserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^gitserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^gitserver.*"}[1h]))
Git Server: Provisioning indicators (not available on server)
gitserver: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101600
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101601
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[1d])
gitserver: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101610
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^gitserver.*"}[5m])
gitserver: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Git Server is expected to use up all the memory it is provided.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101611
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^gitserver.*"}[5m])
gitserver: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101612
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^gitserver.*"})
Git Server: Golang runtime monitoring
gitserver: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101700
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*gitserver"})
gitserver: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101701
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*gitserver"})
Git Server: Kubernetes monitoring (only available on Kubernetes)
gitserver: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/gitserver/gitserver?viewPanel=101800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*gitserver"}) / count by (app) (up{app=~".*gitserver"}) * 100
Postgres
Postgres metrics, exported from postgres_exporter (not available on server).
To see this dashboard, visit /-/debug/grafana/d/postgres/postgres
on your Sourcegraph instance.
postgres: connections
Active connections
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job) (pg_stat_activity_count{datname!~"template.*|postgres|cloudsqladmin"}) OR sum by (job) (pg_stat_activity_count{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
postgres: usage_connections_percentage
Connections in use
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(pg_stat_activity_count) by (job) / (sum(pg_settings_max_connections) by (job) - sum(pg_settings_superuser_reserved_connections) by (job)) * 100
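The percentage above divides active connections by the connections actually available to non-superusers (max_connections minus the superuser-reserved slots). A hypothetical sketch of that computation:

```python
def connections_in_use_percent(active: int, max_connections: int,
                               superuser_reserved: int) -> float:
    """Mirrors pg_stat_activity_count /
    (pg_settings_max_connections - superuser_reserved_connections) * 100."""
    available = max_connections - superuser_reserved
    return active / available * 100
```

For example, 90 active connections with max_connections=100 and 10 reserved is already 100% of the usable pool, which is why the reserved slots are subtracted from the denominator.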
postgres: transaction_durations
Maximum transaction durations
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job) (pg_stat_activity_max_tx_duration{datname!~"template.*|postgres|cloudsqladmin",job!="codeintel-db"}) OR sum by (job) (pg_stat_activity_max_tx_duration{job="codeinsights-db", datname!~"template.*|cloudsqladmin"})
Postgres: Database and collector status
postgres: postgres_up
Database availability
A non-zero value indicates the database is online.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLpg_up
postgres: invalid_indexes
Invalid indexes (unusable by the query planner)
A non-zero value indicates that Postgres failed to build an index. Expect degraded performance until the index is manually rebuilt.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (relname)(pg_invalid_index_count)
postgres: pg_exporter_err
Errors scraping postgres exporter
This value indicates issues retrieving metrics from postgres_exporter.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLpg_exporter_last_scrape_error
postgres: migration_in_progress
Active schema migration
A value of 0 indicates that no migration is in progress.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLpg_sg_migration_status
Postgres: Object size and bloat
postgres: pg_table_size
Table size
Total size of this table
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (relname)(pg_table_bloat_size)
postgres: pg_table_bloat_ratio
Table bloat ratio
Estimated bloat ratio of this table (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (relname)(pg_table_bloat_ratio) * 100
postgres: pg_index_size
Index size
Total size of this index
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (relname)(pg_index_bloat_size)
postgres: pg_index_bloat_ratio
Index bloat ratio
Estimated bloat ratio of this index (high bloat = high overhead)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (relname)(pg_index_bloat_ratio) * 100
Postgres: Provisioning indicators (not available on server)
postgres: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
postgres: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[1d])
postgres: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
postgres: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(pgsql|codeintel-db|codeinsights).*"}[5m])
postgres: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^(pgsql|codeintel-db|codeinsights).*"})
Postgres: Kubernetes monitoring (only available on Kubernetes)
postgres: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/postgres/postgres?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) / count by (app) (up{app=~".*(pgsql|codeintel-db|codeinsights)"}) * 100
Precise Code Intel Worker
Handles conversion of uploaded precise code intelligence bundles.
To see this dashboard, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker
on your Sourcegraph instance.
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_queue_size
Unprocessed upload record queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_queue_growth_rate
Unprocessed upload record queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate is faster than the enqueue rate
- A value = 1 indicates that the processing rate matches the enqueue rate
- A value > 1 indicates that the processing rate is slower than the enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[30m])) / sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[30m]))
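The interpretation rules above reduce to a simple ratio of enqueued to processed records over the window. An illustrative sketch (function name is hypothetical):

```python
def queue_growth_rate(enqueued: float, processed: float) -> float:
    """enqueue increase / process increase over the window;
    a value above 1 means the backlog is growing."""
    if processed == 0:
        return float("inf") if enqueued > 0 else 0.0
    return enqueued / processed
```

For example, 120 records enqueued against 100 processed over 30m gives 1.2: the queue is growing and will not drain at the current processing rate.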
precise-code-intel-worker: codeintel_upload_queued_max_age
Unprocessed upload record queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_upload_queued_duration_seconds_total{job=~"^precise-code-intel-worker.*"})
Precise Code Intel Worker: Codeintel: LSIF uploads
precise-code-intel-worker: codeintel_upload_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_codeintel_upload_processor_handlers{job=~"^precise-code-intel-worker.*"})
precise-code-intel-worker: codeintel_upload_processor_upload_size
Sum of upload sizes in bytes being processed by each precise code-intel worker instance
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(instance) (src_codeintel_upload_processor_upload_size{job="precise-code-intel-worker"})
precise-code-intel-worker: codeintel_upload_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_upload_processor_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_upload_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_upload_processor_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_upload_processor_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: dbstore stats
precise-code-intel-worker: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
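All of the error-rate panels on this dashboard share the same arithmetic: errors divided by total attempts (successes plus errors), times 100. A minimal Python sketch of that calculation, with made-up counter values standing in for the Prometheus data:

```python
def error_rate_percent(errors: float, successes: float) -> float:
    """Errors as a percentage of all attempts (successes + errors)."""
    total = successes + errors
    # PromQL would produce no sample for 0/0; returning 0.0 keeps the sketch simple.
    return 0.0 if total == 0 else errors / total * 100

# 5 failed and 95 successful store operations in the 5m window.
print(error_rate_percent(5, 95))  # → 5.0
```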
Precise Code Intel Worker: Codeintel: lsifstore stats
precise-code-intel-worker: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
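The 99th-percentile panels use Prometheus's histogram_quantile, which estimates a quantile by linear interpolation inside cumulative histogram buckets. A simplified Python sketch of that estimation (it ignores details such as the +Inf bucket and per-series rates; the bucket data is hypothetical):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative (le, count) histogram buckets."""
    buckets = sorted(buckets)          # [(upper_bound, cumulative_count), ...]
    total = buckets[-1][1]
    rank = q * total                   # position of the target observation
    lower_le, lower_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            # Interpolate linearly inside the bucket containing the rank.
            frac = (rank - lower_count) / (count - lower_count)
            return lower_le + (le - lower_le) * frac
        lower_le, lower_count = le, count
    return buckets[-1][0]

# 100 observations: 50 under 0.1s, 90 under 0.5s, all under 1s.
print(histogram_quantile(0.99, [(0.1, 50), (0.5, 90), (1.0, 100)]))
```

With 90 of 100 observations under 0.5s, the 0.99 quantile lands 90% of the way through the 0.5s–1s bucket, i.e. approximately 0.95s.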
precise-code-intel-worker: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Workerutil: lsif_uploads dbworker/store stats
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_workerutil_dbworker_store_codeintel_upload_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: workerutil_dbworker_store_codeintel_upload_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_upload_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_upload_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: gitserver client
precise-code-intel-worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Codeintel: uploadstore stats
precise-code-intel-worker: codeintel_uploadstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
sum by (le)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
precise-code-intel-worker: codeintel_uploadstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploadstore_duration_seconds_bucket{job=~"^precise-code-intel-worker.*"}[5m])))
precise-code-intel-worker: codeintel_uploadstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))
precise-code-intel-worker: codeintel_uploadstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploadstore_total{job=~"^precise-code-intel-worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploadstore_errors_total{job=~"^precise-code-intel-worker.*"}[5m]))) * 100
Precise Code Intel Worker: Database connections
precise-code-intel-worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (src_pgsql_conns_open{app_name="precise-code-intel-worker"})
precise-code-intel-worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="precise-code-intel-worker"})
precise-code-intel-worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (src_pgsql_conns_idle{app_name="precise-code-intel-worker"})
precise-code-intel-worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="precise-code-intel-worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="precise-code-intel-worker"}[5m]))
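This panel divides the total time connection requests spent blocked over the window by the number of requests that had to wait, yielding a mean wait per request. The arithmetic, sketched in Python with illustrative deltas:

```python
def mean_blocked_seconds_per_request(blocked_seconds_delta: float,
                                     waited_for_delta: float) -> float:
    """Average seconds a connection request spent blocked over the window."""
    if waited_for_delta == 0:
        return 0.0  # no requests waited; PromQL would yield no sample here
    return blocked_seconds_delta / waited_for_delta

# 10 requests waited a combined 2.5 seconds over the last 5m.
print(mean_blocked_seconds_per_request(2.5, 10))  # → 0.25
```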
precise-code-intel-worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100730
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100731
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="precise-code-intel-worker"}[5m]))
precise-code-intel-worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100732
on your Sourcegraph instance.
Technical details
Query:
sum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="precise-code-intel-worker"}[5m]))
Precise Code Intel Worker: Container monitoring (not available on server)
precise-code-intel-worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod precise-code-intel-worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p precise-code-intel-worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' precise-code-intel-worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the precise-code-intel-worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs precise-code-intel-worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
count by(name) ((time() - container_last_seen{name=~"^precise-code-intel-worker.*"}) > 60)
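The query above flags containers whose last-seen timestamp is more than 60 seconds old. The same check can be sketched in Python over a hypothetical name-to-timestamp map standing in for container_last_seen:

```python
def missing_containers(last_seen, now, threshold=60):
    """Names of containers not seen for more than `threshold` seconds."""
    return sorted(name for name, ts in last_seen.items() if now - ts > threshold)

# Illustrative samples: worker-0 was seen 5s ago, worker-1 5 minutes ago.
samples = {"precise-code-intel-worker-0": 995.0,
           "precise-code-intel-worker-1": 700.0}
print(missing_containers(samples, now=1000.0))  # → ['precise-code-intel-worker-1']
```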
precise-code-intel-worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}
precise-code-intel-worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with precise-code-intel-worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100803
on your Sourcegraph instance.
Technical details
Query:
sum by(name) (rate(container_fs_reads_total{name=~"^precise-code-intel-worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^precise-code-intel-worker.*"}[1h]))
Precise Code Intel Worker: Provisioning indicators (not available on server)
precise-code-intel-worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[1d])
precise-code-intel-worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
precise-code-intel-worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^precise-code-intel-worker.*"}[5m])
precise-code-intel-worker: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
max by (name) (container_oom_events_total{name=~"^precise-code-intel-worker.*"})
Precise Code Intel Worker: Golang runtime monitoring
precise-code-intel-worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
max by(instance) (go_goroutines{job=~".*precise-code-intel-worker"})
precise-code-intel-worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
max by(instance) (go_gc_duration_seconds{job=~".*precise-code-intel-worker"})
Precise Code Intel Worker: Kubernetes monitoring (only available on Kubernetes)
precise-code-intel-worker: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/precise-code-intel-worker/precise-code-intel-worker?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
sum by(app) (up{app=~".*precise-code-intel-worker"}) / count by (app) (up{app=~".*precise-code-intel-worker"}) * 100
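The availability query divides the number of pods reporting up (1) by the number of pods scraped. The equivalent arithmetic over a hypothetical list of up samples:

```python
def pods_available_percentage(up_samples):
    """Percentage of scraped pods reporting up (1) rather than down (0)."""
    return sum(up_samples) / len(up_samples) * 100

# Three of four pods are up.
print(pods_available_percentage([1, 1, 1, 0]))  # → 75.0
```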
Redis
Metrics from both redis databases.
To see this dashboard, visit /-/debug/grafana/d/redis/redis
on your Sourcegraph instance.
Redis: Redis Store
redis: redis-store_up
Redis-store availability
A value of 1 indicates the service is currently running.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
redis_up{app="redis-store"}
Redis: Redis Cache
redis: redis-cache_up
Redis-cache availability
A value of 1 indicates the service is currently running.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
redis_up{app="redis-cache"}
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-cache.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-cache.*"}[5m])
redis: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
max by (name) (container_oom_events_total{name=~"^redis-cache.*"})
Redis: Provisioning indicators (not available on server)
redis: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
quantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[1d])
redis: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^redis-store.*"}[5m])
redis: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
max_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^redis-store.*"}[5m])
redis: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
max by (name) (container_oom_events_total{name=~"^redis-store.*"})
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
sum by(app) (up{app=~".*redis-cache"}) / count by (app) (up{app=~".*redis-cache"}) * 100
Redis: Kubernetes monitoring (only available on Kubernetes)
redis: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/redis/redis?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
sum by(app) (up{app=~".*redis-store"}) / count by (app) (up{app=~".*redis-store"}) * 100
Worker
Manages background processes.
To see this dashboard, visit /-/debug/grafana/d/worker/worker
on your Sourcegraph instance.
Worker: Active jobs
worker: worker_job_count
Number of worker instances running each job
The number of worker instances running each job type. It is necessary for each job type to be managed by at least one worker instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
sum by (job_name) (src_worker_jobs{job=~"^worker.*"})
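The per-job panels that follow alert when a job type has no worker instance running it. A small Python sketch of that invariant check, with illustrative job names and instance counts:

```python
def unmanaged_jobs(expected_jobs, instance_counts):
    """Job types with no worker instance currently running them."""
    return sorted(j for j in expected_jobs if instance_counts.get(j, 0) < 1)

expected = {"codeintel-upload-janitor", "codeintel-commitgraph-updater"}
# Two instances run the janitor; nothing runs the commit-graph updater.
print(unmanaged_jobs(expected, {"codeintel-upload-janitor": 2}))
# → ['codeintel-commitgraph-updater']
```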
worker: worker_job_codeintel-upload-janitor_count
Number of worker instances running the codeintel-upload-janitor job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
sum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-upload-janitor"})
worker: worker_job_codeintel-commitgraph-updater_count
Number of worker instances running the codeintel-commitgraph-updater job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
sum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-commitgraph-updater"})
worker: worker_job_codeintel-autoindexing-scheduler_count
Number of worker instances running the codeintel-autoindexing-scheduler job
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum (src_worker_jobs{job=~"^worker.*", job_name="codeintel-autoindexing-scheduler"})
Worker: Database record encrypter
worker: records_encrypted_at_rest_percentage
Percentage of database records encrypted at rest
Percentage of encrypted database records
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELL(max(src_records_encrypted_at_rest_total) by (tableName)) / ((max(src_records_encrypted_at_rest_total) by (tableName)) + (max(src_records_unencrypted_at_rest_total) by (tableName))) * 100
worker: records_encrypted_total
Database records encrypted every 5m
Number of encrypted database records every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (tableName)(increase(src_records_encrypted_total{job=~"^worker.*"}[5m]))
worker: records_decrypted_total
Database records decrypted every 5m
Number of decrypted database records every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (tableName)(increase(src_records_decrypted_total{job=~"^worker.*"}[5m]))
worker: record_encryption_errors_total
Encryption operation errors every 5m
Number of database record encryption/decryption errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_record_encryption_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: Repository with stale commit graph
worker: codeintel_commit_graph_queue_size
Repository queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_commit_graph_total{job=~"^worker.*"})
worker: codeintel_commit_graph_queue_growth_rate
Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the processing rate exceeds the enqueue rate
- A value equal to 1 indicates that the processing rate matches the enqueue rate
- A value greater than 1 indicates that the processing rate is lower than the enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_commit_graph_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[30m]))
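The growth-rate interpretation above can be sketched with hypothetical window totals standing in for the two `increase(...)` sums:

```python
def queue_growth_rate(enqueued, processed):
    """Ratio of enqueued jobs to finished jobs over the same window.
    < 1: queue draining; == 1: steady state; > 1: queue growing."""
    return enqueued / processed

# Hypothetical 30m window: 120 repositories enqueued, 150 graph updates finished.
rate = queue_growth_rate(120, 150)
print(rate)  # 0.8 -> process rate exceeds enqueue rate, queue is draining
```

A sustained value above 1 means the commit-graph queue is growing faster than the worker can drain it.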
worker: codeintel_commit_graph_queued_max_age
Repository queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Repository commit graph updates
worker: codeintel_commit_graph_processor_total
Update operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_99th_percentile_duration
Aggregate successful update operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_commit_graph_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_errors_total
Update operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_commit_graph_processor_error_rate
Update operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_commit_graph_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_commit_graph_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
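The error-rate query above (and its counterparts throughout this dashboard) divides errors by the sum of successes and errors, since the two counters are tracked separately. A minimal sketch with hypothetical counts:

```python
def error_rate_pct(errors, successes):
    """Error percentage as these panels compute it: errors are counted
    separately from successes, so the denominator is their sum."""
    total = successes + errors
    return 100.0 * errors / total if total else 0.0

# Hypothetical 5m window: 3 failed updates alongside 97 successful ones.
print(round(error_rate_pct(3, 97), 1))  # 3.0
```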
Worker: Codeintel: Dependency index job
worker: codeintel_dependency_index_queue_size
Dependency index job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_dependency_index_total{job=~"^worker.*"})
worker: codeintel_dependency_index_queue_growth_rate
Dependency index job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value less than 1 indicates that the processing rate exceeds the enqueue rate
- A value equal to 1 indicates that the processing rate matches the enqueue rate
- A value greater than 1 indicates that the processing rate is lower than the enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_index_total{job=~"^worker.*"}[30m])) / sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[30m]))
worker: codeintel_dependency_index_queued_max_age
Dependency index job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_dependency_index_queued_duration_seconds_total{job=~"^worker.*"})
Worker: Codeintel: Dependency index jobs
worker: codeintel_dependency_index_handlers
Active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_codeintel_dependency_index_processor_handlers{job=~"^worker.*"})
worker: codeintel_dependency_index_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_dependency_index_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_index_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_index_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_index_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Auto-index scheduler
worker: codeintel_autoindexing_total
Auto-indexing job scheduler operations every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_99th_percentile_duration
Aggregate successful auto-indexing job scheduler operation duration distribution over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_errors_total
Auto-indexing job scheduler operation errors every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))
worker: codeintel_autoindexing_error_rate
Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^worker.*"}[10m]))) * 100
Worker: Codeintel: dbstore stats
worker: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100702
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100703
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100713
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: lsifstore stats
worker: codeintel_uploads_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100803
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_uploads_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_lsifstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_uploads_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_uploads_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100813
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_lsifstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_lsifstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Workerutil: lsif_dependency_indexes dbworker/store stats
worker: workerutil_dbworker_store_codeintel_dependency_index_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_workerutil_dbworker_store_codeintel_dependency_index_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_codeintel_dependency_index_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=100903
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_codeintel_dependency_index_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: gitserver client
worker: codeintel_gitserver_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_gitserver_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_gitserver_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_gitserver_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_gitserver_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_codeintel_gitserver_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_codeintel_gitserver_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeintel: Dependency repository insert
worker: codeintel_dependency_repos_total
Aggregate insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
Aggregate successful insert operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_errors_total
Aggregate insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Aggregate insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: codeintel_dependency_repos_total
Insert operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_99th_percentile_duration
99th percentile successful insert operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,scheme,new)(rate(src_codeintel_dependency_repos_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: codeintel_dependency_repos_errors_total
Insert operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))
worker: codeintel_dependency_repos_error_rate
Insert operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m])) / (sum by (scheme,new)(increase(src_codeintel_dependency_repos_total{job=~"^worker.*"}[5m])) + sum by (scheme,new)(increase(src_codeintel_dependency_repos_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Permissions
worker: user_success_syncs_total
Total number of user permissions syncs
Indicates the total number of user permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_repo_perms_syncer_success_syncs{type="user"})
worker: user_success_syncs
Number of user permissions syncs over 5m
Indicates the number of user permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_repo_perms_syncer_success_syncs{type="user"}[5m]))
worker: user_initial_syncs
Number of first user permissions syncs over 5m
Indicates the number of permissions syncs done for the first time for a user.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_repo_perms_syncer_initial_syncs{type="user"}[5m]))
worker: repo_success_syncs_total
Total number of repo permissions syncs
Indicates the total number of repo permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_repo_perms_syncer_success_syncs{type="repo"})
worker: repo_success_syncs
Number of repo permissions syncs over 5m
Indicates the number of repo permissions syncs completed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_repo_perms_syncer_success_syncs{type="repo"}[5m]))
worker: repo_initial_syncs
Number of first repo permissions syncs over 5m
Indicates the number of permissions syncs done for the first time for a repo.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_repo_perms_syncer_initial_syncs{type="repo"}[5m]))
worker: users_consecutive_sync_delay
Max duration between two consecutive permissions syncs for a user
Indicates the max delay between two consecutive permissions syncs for a user during the period.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101220
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time (src_repo_perms_syncer_perms_consecutive_sync_delay{type="user"} [1m]))
worker: repos_consecutive_sync_delay
Max duration between two consecutive permissions syncs for a repo
Indicates the max delay between two consecutive permissions syncs for a repo during the period.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101221
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time (src_repo_perms_syncer_perms_consecutive_sync_delay{type="repo"} [1m]))
worker: users_first_sync_delay
Max duration between user creation and first permissions sync
Indicates the max delay between user creation and the user's first permissions sync.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101230
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_repo_perms_syncer_perms_first_sync_delay{type="user"}[1m]))
worker: repos_first_sync_delay
Max duration between repo creation and first permissions sync over 1m
Indicates the max delay between repo creation and the repo's first permissions sync.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101231
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_repo_perms_syncer_perms_first_sync_delay{type="repo"}[1m]))
worker: permissions_found_count
Number of permissions found during user/repo permissions sync
Indicates the number of permissions found during user/repo permissions syncs.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101240
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (type) (src_repo_perms_syncer_perms_found)
worker: permissions_found_avg
Average number of permissions found during permissions sync per user/repo
Indicates the average number of permissions found per user/repo during permissions syncs.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101241
on your Sourcegraph instance.
Technical details
Query:
SHELLavg by (type) (src_repo_perms_syncer_perms_found)
worker: perms_syncer_outdated_perms
Number of entities with outdated permissions
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101250
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (type) (src_repo_perms_syncer_outdated_perms)
worker: perms_syncer_sync_duration
95th percentile permissions sync duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101260
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, max by (le, type) (rate(src_repo_perms_syncer_sync_duration_seconds_bucket[1m])))
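The `histogram_quantile` call above estimates the 95th percentile from cumulative duration buckets. A simplified sketch of that estimation (linear interpolation within the target bucket, ignoring Prometheus's handling of the `+Inf` bucket), using hypothetical sync-duration buckets:

```python
def histogram_quantile(q, buckets):
    """Estimate a quantile from cumulative histogram buckets, mirroring
    Prometheus's linear interpolation within the bucket that contains the rank.
    buckets: list of (upper_bound, cumulative_count), sorted by upper bound."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            # Interpolate between this bucket's lower and upper bounds.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Hypothetical buckets (seconds): 80 syncs <= 1s, 95 <= 5s, 100 <= 30s.
p95 = histogram_quantile(0.95, [(1.0, 80), (5.0, 95), (30.0, 100)])
print(p95)  # 5.0
```

Because the estimate interpolates within a bucket, its accuracy depends on how finely the duration buckets are spaced around the quantile of interest.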
worker: perms_syncer_sync_errors
Permissions sync error rate
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101270
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (type) (ceil(rate(src_repo_perms_syncer_sync_errors_total[1m])))
worker: perms_syncer_scheduled_repos_total
Total number of repos scheduled for permissions sync
Indicates how many repositories have been scheduled for a permissions sync. See the repository permissions synchronization documentation for more details.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101271
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repo_perms_syncer_schedule_repos_total[1m]))
Worker: Gitserver: Gitserver Client
worker: gitserver_client_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
worker: gitserver_client_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: gitserver_client_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
worker: gitserver_client_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
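The error-rate query above divides errors by the total number of attempts (the total series plus the error series, so the denominator counts both successes and failures). A minimal sketch of the same arithmetic, with hypothetical sample counts standing in for the two 5m `increase()` sums:

```python
def error_rate_percent(successes: float, errors: float) -> float:
    """Error rate as computed by these panels: errors / (successes + errors) * 100.

    `successes` stands in for the src_gitserver_client_total increase and
    `errors` for src_gitserver_client_errors_total over the same 5m window.
    """
    attempts = successes + errors
    if attempts == 0:
        return 0.0  # no traffic in the window; avoid division by zero
    return errors / attempts * 100

# Hypothetical 5m window: 950 successful operations, 50 errors.
print(error_rate_percent(950, 50))  # → 5.0
```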
worker: gitserver_client_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m]))
worker: gitserver_client_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^worker.*"}[5m])))
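The 99th-percentile panels apply `histogram_quantile` over cumulative duration buckets. As a rough illustration of how that estimate works, here is a minimal Python sketch of Prometheus-style linear interpolation within a bucket; the bucket values are hypothetical, not taken from any real instance:

```python
def histogram_quantile(q, buckets):
    """Approximate Prometheus histogram_quantile() on cumulative buckets.

    `buckets` is a list of (upper_bound, cumulative_count) pairs sorted by
    upper bound and ending with float('inf'), like the `le` labels of a
    *_duration_seconds_bucket series.
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    lower, prev_count = 0.0, 0.0
    for upper, count in buckets:
        if count >= rank:
            if upper == float("inf"):
                return lower  # quantile falls in +Inf: return last finite bound
            # linear interpolation within the bucket, as Prometheus does
            return lower + (upper - lower) * (rank - prev_count) / (count - prev_count)
        lower, prev_count = upper, count
    return float("nan")

# Hypothetical buckets: 90 requests <= 0.1s, 99 <= 1s, 100 total.
buckets = [(0.1, 90), (1.0, 99), (float("inf"), 100)]
print(histogram_quantile(0.99, buckets))  # → 1.0
```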
worker: gitserver_client_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))
worker: gitserver_client_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^worker.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: dbstore stats
worker: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))
worker: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: service stats
worker: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m]))
worker: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))
worker: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Workspace resolver dbstore
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101601
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101603
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_changes_batch_spec_resolution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Bulk operation processor dbstore
worker: workerutil_dbworker_store_batches_bulk_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_bulk_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101701
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_bulk_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batches_bulk_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101702
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_bulk_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101703
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_bulk_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Changeset reconciler dbstore
worker: workerutil_dbworker_store_batches_reconciler_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101801
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batches_reconciler_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101802
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batches_reconciler_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101803
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batches_reconciler_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Workspace execution dbstore
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101901
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101902
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=101903
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_batch_spec_workspace_execution_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Batches: Executor jobs
worker: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102000
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
worker: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.
- A value < 1 indicates that the process rate exceeds the enqueue rate
- A value = 1 indicates that the process rate equals the enqueue rate
- A value > 1 indicates that the enqueue rate exceeds the process rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (queue)(increase(src_executor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
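The growth-rate query above is simply the ratio of jobs enqueued to jobs finished over the 30m window, so the queue grows whenever the ratio exceeds 1. A tiny illustration with made-up counts:

```python
def queue_growth_rate(enqueued: float, processed: float) -> float:
    """Ratio of enqueued jobs to finished jobs over the window (e.g. 30m).

    > 1: queue growing (enqueue rate exceeds process rate)
    = 1: steady state
    < 1: queue draining
    """
    return enqueued / processed

# Hypothetical 30m window: 120 jobs enqueued, 100 jobs finished.
print(queue_growth_rate(120, 100))  # → 1.2, i.e. the queue is growing
```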
worker: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102002
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_queued_duration_seconds_total{queue=~"batches",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Worker: Codeintel: lsif_upload record resetter
worker: codeintel_background_upload_record_resets_total
LSIF upload records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_upload_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_failures_total
LSIF upload records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_upload_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_upload_record_reset_errors_total
LSIF upload operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_upload_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_index record resetter
worker: codeintel_background_index_record_resets_total
LSIF index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_failures_total
LSIF index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_index_record_reset_errors_total
LSIF index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeintel: lsif_dependency_index record resetter
worker: codeintel_background_dependency_index_record_resets_total
LSIF dependency index records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_dependency_index_record_resets_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_failures_total
LSIF dependency index records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_dependency_index_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: codeintel_background_dependency_index_record_reset_errors_total
LSIF dependency index operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_dependency_index_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: Query Runner Queue
worker: query_runner_worker_queue_size
Code insights query runner queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102400
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_query_runner_worker_total{job=~"^worker.*"})
worker: query_runner_worker_queue_growth_rate
Code insights query runner queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the process rate exceeds the enqueue rate
- A value = 1 indicates that the process rate equals the enqueue rate
- A value > 1 indicates that the enqueue rate exceeds the process rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_total{job=~"^worker.*"}[30m])) / sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[30m]))
Worker: Codeinsights: insights queue processor
worker: query_runner_worker_handlers
Active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_query_runner_worker_processor_handlers{job=~"^worker.*"})
worker: query_runner_worker_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102511
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_query_runner_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_query_runner_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_query_runner_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Codeinsights: code insights query runner queue record resetter
worker: query_runner_worker_record_resets_total
Insights query runner queue records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_record_resets_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_record_reset_failures_total
Insights query runner queue records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: query_runner_worker_record_reset_errors_total
Insights query runner queue operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_query_runner_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Codeinsights: dbstore stats
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102702
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102703
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102711
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_insights_query_runner_jobs_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102712
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_insights_query_runner_jobs_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102713
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Code Insights queue utilization
worker: insights_queue_unutilized_size
Insights queue size that is not utilized (not processing)
Any value on this panel indicates code insights is not processing queries from its queue. This observable and alert only fire if there are records in the queue and there have been no dequeue attempts for 30 minutes.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102800
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_query_runner_worker_total{job=~"^worker.*"}) > 0 and on(job) sum by (op)(increase(src_workerutil_dbworker_store_insights_query_runner_jobs_store_total{job=~"^worker.*",op="Dequeue"}[5m])) < 1
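The query above fires only when both conditions hold: the queue is non-empty (`> 0`) and there has been essentially no Dequeue activity in the window (`< 1`). A sketch of that two-part check with hypothetical values:

```python
def insights_queue_unutilized(queue_size: int, dequeues_in_window: float) -> bool:
    """True when records are queued but nothing is dequeuing them,
    mirroring the `> 0 and ... < 1` structure of the PromQL above."""
    return queue_size > 0 and dequeues_in_window < 1

print(insights_queue_unutilized(42, 0))   # → True: queued work, no dequeues
print(insights_queue_unutilized(42, 15))  # → False: queue is being processed
print(insights_queue_unutilized(0, 0))    # → False: nothing queued, nothing to alert on
```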
Worker: Database connections
worker: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="worker"})
worker: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102901
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="worker"})
worker: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102910
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="worker"})
worker: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102911
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="worker"})
worker: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102920
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="worker"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="worker"}[5m]))
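The query above divides the total time requests spent blocked waiting for a database connection by the number of waits, yielding the mean wait per blocked request. A minimal sketch of that arithmetic, with hypothetical 5m totals:

```python
def mean_blocked_seconds(blocked_seconds: float, waits: float) -> float:
    """Mean time a request spent blocked waiting for a connection,
    mirroring src_pgsql_conns_blocked_seconds / src_pgsql_conns_waited_for."""
    if waits == 0:
        return 0.0  # no request had to wait in the window
    return blocked_seconds / waits

# Hypothetical 5m window: 2.5s of total blocking across 50 waits.
print(mean_blocked_seconds(2.5, 50))  # → 0.05 seconds per blocked request
```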
worker: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102930
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="worker"}[5m]))
worker: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102931
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="worker"}[5m]))
worker: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=102932
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="worker"}[5m]))
Worker: Container monitoring (not available on server)
worker: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod worker (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p worker.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' worker (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the worker container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs worker (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103000
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
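The query counts containers whose last-seen timestamp is more than 60 seconds in the past. As an illustrative Python sketch of the same check (names and structure are hypothetical, not a Sourcegraph API):

```python
import time

def missing_containers(last_seen, now=None, threshold=60.0):
    """Names of containers not seen for more than `threshold` seconds.

    Mirrors: count by(name) ((time() - container_last_seen{name=~"^worker.*"}) > 60)
    last_seen: mapping of container name -> unix timestamp of last observation.
    """
    if now is None:
        now = time.time()
    return [name for name, ts in last_seen.items() if now - ts > threshold]
```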
worker: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103001
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}
worker: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103002
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}
worker: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with worker issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^worker.*"}[1h]) + rate(container_fs_writes_total{name=~"^worker.*"}[1h]))
Worker: Provisioning indicators (not available on server)
worker: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103100
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103101
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[1d])
worker: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103110
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^worker.*"}[5m])
worker: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103111
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^worker.*"}[5m])
worker: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103112
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^worker.*"})
Worker: Golang runtime monitoring
worker: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103200
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*worker"})
worker: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103201
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*worker"})
Worker: Kubernetes monitoring (only available on Kubernetes)
worker: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*worker"}) / count by (app) (up{app=~".*worker"}) * 100
Worker: Own: repo indexer dbstore
worker: workerutil_dbworker_store_own_background_worker_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_own_background_worker_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_workerutil_dbworker_store_own_background_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_own_background_worker_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_own_background_worker_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m])) + sum(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
worker: workerutil_dbworker_store_own_background_worker_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_own_background_worker_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_workerutil_dbworker_store_own_background_worker_store_duration_seconds_bucket{job=~"^worker.*"}[5m])))
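The duration panels in this reference rely on Prometheus's histogram_quantile, which finds the first cumulative bucket that reaches the requested rank and linearly interpolates inside it. A simplified Python sketch of that interpolation (it assumes sorted, monotonic buckets and ignores the +Inf bucket handling Prometheus applies):

```python
def histogram_quantile(q, buckets):
    """Approximate a quantile from cumulative histogram buckets.

    buckets: [(upper_bound, cumulative_count), ...] sorted by upper_bound.
    Simplified relative to Prometheus: no +Inf bucket or NaN handling.
    """
    total = buckets[-1][1]
    rank = q * total
    lower_bound, prev_count = 0.0, 0.0
    for upper_bound, count in buckets:
        if count >= rank:
            in_bucket = count - prev_count
            if in_bucket == 0:
                return upper_bound
            # linear interpolation inside the matching bucket
            return lower_bound + (upper_bound - lower_bound) * (rank - prev_count) / in_bucket
        lower_bound, prev_count = upper_bound, count
    return buckets[-1][0]
```

Because the result is interpolated within bucket boundaries, the reported 99th percentile is only as precise as the bucket layout of the underlying `_duration_seconds_bucket` metric.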
worker: workerutil_dbworker_store_own_background_worker_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))
worker: workerutil_dbworker_store_own_background_worker_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_workerutil_dbworker_store_own_background_worker_store_errors_total{job=~"^worker.*"}[5m]))) * 100
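The error-rate panels throughout this reference all use the same shape of query: errors divided by the sum of the two series, times 100 (the queries add `_total` and `_errors_total` in the denominator, which implies the `_total` metric counts successful operations). A small Python sketch of the formula:

```python
def error_rate_percent(errors, successes):
    """Error rate as computed by these panels:
    errors / (successes + errors) * 100."""
    total = successes + errors
    if total == 0:
        return 0.0
    return errors / total * 100.0
```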
Worker: Own: repo indexer worker queue
worker: own_background_worker_handlers
Handler active handlers
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_own_background_worker_processor_handlers{job=~"^worker.*"})
worker: own_background_worker_processor_total
Handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m]))
worker: own_background_worker_processor_99th_percentile_duration
Aggregate successful handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103511
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_own_background_worker_processor_duration_seconds_bucket{job=~"^worker.*"}[5m]))
worker: own_background_worker_processor_errors_total
Handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))
worker: own_background_worker_processor_error_rate
Handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_own_background_worker_processor_total{job=~"^worker.*"}[5m])) + sum(increase(src_own_background_worker_processor_errors_total{job=~"^worker.*"}[5m]))) * 100
Worker: Own: own repo indexer record resetter
worker: own_background_worker_record_resets_total
Own repo indexer queue records reset to queued state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_record_resets_total{job=~"^worker.*"}[5m]))
worker: own_background_worker_record_reset_failures_total
Own repo indexer queue records reset to errored state every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_record_reset_failures_total{job=~"^worker.*"}[5m]))
worker: own_background_worker_record_reset_errors_total
Own repo indexer queue operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_own_background_worker_record_reset_errors_total{job=~"^worker.*"}[5m]))
Worker: Own: index job scheduler
worker: own_background_index_scheduler_total
Own index job scheduler operations every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m]))
worker: own_background_index_scheduler_99th_percentile_duration
99th percentile successful own index job scheduler operation duration over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103701
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_own_background_index_scheduler_duration_seconds_bucket{job=~"^worker.*"}[10m])))
worker: own_background_index_scheduler_errors_total
Own index job scheduler operation errors every 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103702
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))
worker: own_background_index_scheduler_error_rate
Own index job scheduler operation error rate over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103703
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m])) / (sum by (op)(increase(src_own_background_index_scheduler_total{job=~"^worker.*"}[10m])) + sum by (op)(increase(src_own_background_index_scheduler_errors_total{job=~"^worker.*"}[10m]))) * 100
Worker: Site configuration client update latency
worker: worker_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "worker" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103800
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`^worker.*`,instance=~`${instance:regex}`}
worker: worker_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "worker" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/worker/worker?viewPanel=103801
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`^worker.*`,instance=~`${instance:regex}`}[1m]))
Repo Updater
Manages interaction with code hosts and instructs Gitserver to update repositories.
To see this dashboard, visit /-/debug/grafana/d/repo-updater/repo-updater
on your Sourcegraph instance.
Repo Updater: Repositories
repo-updater: syncer_sync_last_time
Time since last sync
A high value here indicates issues synchronizing repo metadata. If the value is persistently high, make sure all external services have valid tokens.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(timestamp(vector(time()))) - max(src_repoupdater_syncer_sync_last_time)
repo-updater: src_repoupdater_max_sync_backoff
Time since oldest sync
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_max_sync_backoff)
repo-updater: src_repoupdater_syncer_sync_errors_total
Site level external service sync error rate
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (family) (rate(src_repoupdater_syncer_sync_errors_total{owner!="user",reason!="invalid_npm_path",reason!="internal_rate_limit"}[5m]))
repo-updater: syncer_sync_start
Repo metadata sync was started
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (family) (rate(src_repoupdater_syncer_start_sync{family="Syncer.SyncExternalService"}[9h0m0s]))
repo-updater: syncer_sync_duration
95th percentile repositories sync duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, max by (le, family, success) (rate(src_repoupdater_syncer_sync_duration_seconds_bucket[1m])))
repo-updater: source_duration
95th percentile repositories source duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, max by (le) (rate(src_repoupdater_source_duration_seconds_bucket[1m])))
repo-updater: syncer_synced_repos
Repositories synced
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100020
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_syncer_synced_repos_total[1m]))
repo-updater: sourced_repos
Repositories sourced
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100021
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_source_repos_total[1m]))
repo-updater: purge_failed
Repositories purge failed
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100030
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_purge_failed[1m]))
repo-updater: sched_auto_fetch
Repositories scheduled due to hitting a deadline
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100040
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_sched_auto_fetch[1m]))
repo-updater: sched_manual_fetch
Repositories scheduled due to user traffic
Check repo-updater logs if this value is persistently high. This value is not meaningful if there are no user-added code hosts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100041
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_sched_manual_fetch[1m]))
repo-updater: sched_known_repos
Repositories managed by the scheduler
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100050
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_sched_known_repos)
repo-updater: sched_update_queue_length
Rate of growth of update queue length over 5 minutes
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100051
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(deriv(src_repoupdater_sched_update_queue_length[5m]))
repo-updater: sched_loops
Scheduler loops
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100052
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_sched_loops[1m]))
repo-updater: src_repoupdater_stale_repos
Repos that haven't been fetched in more than 8 hours
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100060
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_stale_repos)
repo-updater: sched_error
Repositories schedule error rate
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100061
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(rate(src_repoupdater_sched_error[1m]))
Repo Updater: External services
repo-updater: src_repoupdater_external_services_total
The total number of external services
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_external_services_total)
repo-updater: repoupdater_queued_sync_jobs_total
The total number of queued sync jobs
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_queued_sync_jobs_total)
repo-updater: repoupdater_completed_sync_jobs_total
The total number of completed sync jobs
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_completed_sync_jobs_total)
repo-updater: repoupdater_errored_sync_jobs_percentage
The percentage of external services that have failed their most recent sync
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_repoupdater_errored_sync_jobs_percentage)
repo-updater: github_graphql_rate_limit_remaining
Remaining calls to GitHub graphql API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (src_github_rate_limit_remaining_v2{resource="graphql"})
repo-updater: github_rest_rate_limit_remaining
Remaining calls to GitHub rest API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100121
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (src_github_rate_limit_remaining_v2{resource="rest"})
repo-updater: github_search_rate_limit_remaining
Remaining calls to GitHub search API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100122
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (src_github_rate_limit_remaining_v2{resource="search"})
repo-updater: github_graphql_rate_limit_wait_duration
Time spent waiting for the GitHub graphql API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100130
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="graphql"}[5m]))
repo-updater: github_rest_rate_limit_wait_duration
Time spent waiting for the GitHub rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100131
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
repo-updater: github_search_rate_limit_wait_duration
Time spent waiting for the GitHub search API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100132
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(name) (rate(src_github_rate_limit_wait_duration_seconds{resource="search"}[5m]))
repo-updater: gitlab_rest_rate_limit_remaining
Remaining calls to GitLab rest API before hitting the rate limit
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100140
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (src_gitlab_rate_limit_remaining{resource="rest"})
repo-updater: gitlab_rest_rate_limit_wait_duration
Time spent waiting for the GitLab rest API rate limiter
Indicates how long we're waiting on the rate limit once it has been exceeded.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100141
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (rate(src_gitlab_rate_limit_wait_duration_seconds{resource="rest"}[5m]))
repo-updater: src_internal_rate_limit_wait_duration_bucket
95th percentile time spent successfully waiting on our internal rate limiter
Indicates how long we're waiting on our internal rate limiter when communicating with a code host.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100150
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_internal_rate_limit_wait_duration_bucket{failed="false"}[5m])) by (le, urn))
repo-updater: src_internal_rate_limit_wait_error_count
Rate of failures waiting on our internal rate limiter
The rate at which we fail our internal rate limiter.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100151
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (urn) (rate(src_internal_rate_limit_wait_duration_count{failed="true"}[5m]))
Repo Updater: Gitserver: Gitserver Client
repo-updater: gitserver_client_total
Aggregate graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_total{job=~"^repo-updater.*"}[5m]))
repo-updater: gitserver_client_99th_percentile_duration
Aggregate successful graphql operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: gitserver_client_errors_total
Aggregate graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: gitserver_client_error_rate
Aggregate graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_gitserver_client_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: gitserver_client_total
Graphql operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_total{job=~"^repo-updater.*"}[5m]))
repo-updater: gitserver_client_99th_percentile_duration
99th percentile successful graphql operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op,scope)(rate(src_gitserver_client_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: gitserver_client_errors_total
Graphql operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: gitserver_client_error_rate
Graphql operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op,scope)(increase(src_gitserver_client_total{job=~"^repo-updater.*"}[5m])) + sum by (op,scope)(increase(src_gitserver_client_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Batches: dbstore stats
repo-updater: batches_dbstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: batches_dbstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_batches_dbstore_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_dbstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_dbstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_dbstore_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_dbstore_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Batches: service stats
repo-updater: batches_service_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: batches_service_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_batches_service_duration_seconds_bucket{job=~"^repo-updater.*"}[5m])))
repo-updater: batches_service_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))
repo-updater: batches_service_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_batches_service_total{job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_batches_service_errors_total{job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: Coursier invocation stats
repo-updater: codeintel_coursier_total
Aggregate invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
Aggregate successful invocation operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_errors_total
Aggregate invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Aggregate invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: codeintel_coursier_total
Invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_99th_percentile_duration
99th percentile successful invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_coursier_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
repo-updater: codeintel_coursier_errors_total
Invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_coursier_error_rate
Invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_coursier_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_coursier_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Codeintel: npm invocation stats
repo-updater: codeintel_npm_total
Aggregate invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
Aggregate successful invocation operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_errors_total
Aggregate invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Aggregate invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
repo-updater: codeintel_npm_total
Invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_99th_percentile_duration
99th percentile successful invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_npm_duration_seconds_bucket{op!="RunCommand",job=~"^repo-updater.*"}[5m])))
repo-updater: codeintel_npm_errors_total
Invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))
repo-updater: codeintel_npm_error_rate
Invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) / (sum by (op)(increase(src_codeintel_npm_total{op!="RunCommand",job=~"^repo-updater.*"}[5m])) + sum by (op)(increase(src_codeintel_npm_errors_total{op!="RunCommand",job=~"^repo-updater.*"}[5m]))) * 100
Repo Updater: Repo Updater GRPC server metrics
repo-updater: repo_updater_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m]))
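The gRPC panels switch from `increase()` to `rate()`: `rate()` reports the per-second growth of a monotonically increasing counter over the window, so `increase()` over the same window is approximately `rate()` times the window length. A toy sketch over two counter samples (the sample values are hypothetical):

```python
def rate(sample_start, sample_end, window_seconds):
    """Per-second increase of a monotonically increasing counter over a
    window, like PromQL rate() (ignoring counter resets and the
    extrapolation Prometheus applies at the window edges)."""
    return (sample_end - sample_start) / window_seconds

# counter went from 1200 to 1440 started RPCs over a 2m (120s) window
per_second = rate(1200, 1440, 120)
print(per_second)        # -> 2.0 requests/sec
print(per_second * 120)  # increase() over the same window -> 240.0
```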
repo-updater: repo_updater_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method)
repo-updater: repo_updater_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m]))) ))
repo-updater: repo_updater_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${repo_updater_method:regex}`,grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method)) ))
repo-updater: repo_updater_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100721
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100722
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100730
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100731
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100732
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100740
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100741
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100742
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))
repo-updater: repo_updater_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100750
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method)))
repo-updater: repo_updater_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100760
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${repo_updater_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])) by (grpc_method, grpc_code)
Repo Updater: Repo Updater GRPC "internal error" metrics
repo-updater: repo_updater_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))))))
repo-updater: repo_updater_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method))))))
repo-updater: repo_updater_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "repo_updater" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method, grpc_code))
repo-updater: repo_updater_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "repo_updater" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
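The note above describes a coarse string-prefix heuristic. A minimal sketch of that kind of check, using `grpc:` since that is the one prefix the note names (the real detection may cover additional prefixes):

```python
def looks_like_grpc_internal_error(message: str) -> bool:
    """Coarse heuristic: treat an error as gRPC-internal if its message
    starts with the "grpc:" prefix emitted by the grpc-go library.
    (The real check may match more prefixes; this mirrors the one
    example the note gives.)"""
    return message.lstrip().startswith("grpc:")

print(looks_like_grpc_internal_error("grpc: the client connection is closing"))  # True
print(looks_like_grpc_internal_error("repo not found"))                          # False
```

As the note warns, a prefix check like this can miss gRPC-specific failures whose messages do not carry the expected prefix.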
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))))))
repo-updater: repo_updater_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "repo_updater" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method))))))
repo-updater: repo_updater_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "repo_updater" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "repo_updater" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"repoupdater.v1.RepoUpdaterService",is_internal_error="true",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method, grpc_code))
Repo Updater: Repo Updater GRPC retry metrics
repo-updater: repo_updater_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "repo_updater" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"repoupdater.v1.RepoUpdaterService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"repoupdater.v1.RepoUpdaterService"}[2m])))))))
repo-updater: repo_updater_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "repo_updater" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"repoupdater.v1.RepoUpdaterService",is_retried="true",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}"}[2m])) by (grpc_method))))))
repo-updater: repo_updater_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "repo_updater" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"repoupdater.v1.RepoUpdaterService",grpc_method=~"${repo_updater_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Repo Updater: Site configuration client update latency
repo-updater: repo_updater_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "repo_updater" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`.*repo-updater`,instance=~`${instance:regex}`}
repo-updater: repo_updater_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "repo_updater" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*repo-updater`,instance=~`${instance:regex}`}[1m]))
Repo Updater: HTTP handlers
repo-updater: healthy_request_rate
Requests per second, by route, when status code is 200
The number of healthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code=~"2.."}[5m]))
repo-updater: unhealthy_request_rate
Requests per second, by route, when status code is not 200
The number of unhealthy HTTP requests per second to the internal HTTP API
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (route) (rate(src_http_request_duration_seconds_count{app="repo-updater",code!~"2.."}[5m]))
repo-updater: request_rate_by_code
Requests per second, by status code
The number of HTTP requests per second by code
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code) (rate(src_http_request_duration_seconds_count{app="repo-updater"}[5m]))
repo-updater: 95th_percentile_healthy_requests
95th percentile duration by route, when status code is 200
The 95th percentile duration by route when the status code is 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code=~"2.."}[5m])) by (le, route))
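The `histogram_quantile` calls in these latency panels estimate a percentile from cumulative bucket counters (`_bucket` series with `le` upper bounds). A minimal Python sketch of the interpolation, using invented bucket bounds and counts rather than values from a real instance:

```python
def histogram_quantile(q, buckets):
    """Approximate Prometheus's histogram_quantile().

    buckets: sorted list of (upper_bound_seconds, cumulative_count).
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_bound, prev_count = 0.0, 0
    for bound, count in buckets:
        if count >= rank:
            # Linear interpolation within the matching bucket,
            # as Prometheus does.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# Invented cumulative counts for le="0.1", "0.5", "1", "5" second buckets.
buckets = [(0.1, 70), (0.5, 90), (1.0, 97), (5.0, 100)]
p95 = histogram_quantile(0.95, buckets)  # falls in the 0.5s-1s bucket
```

This is why the quantile's accuracy depends on bucket layout: within a bucket, the estimate is a straight-line interpolation between the bucket's bounds.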
repo-updater: 95th_percentile_unhealthy_requests
95th percentile duration by route, when status code is not 200
The 95th percentile duration by route when the status code is not 200
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_http_request_duration_seconds_bucket{app="repo-updater",code!~"2.."}[5m])) by (le, route))
Repo Updater: Database connections
repo-updater: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="repo-updater"})
repo-updater: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="repo-updater"})
repo-updater: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="repo-updater"})
repo-updater: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="repo-updater"})
repo-updater: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101220
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="repo-updater"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="repo-updater"}[5m]))
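This query divides the growth of two counters over the same 5m window: total seconds spent blocked waiting for a connection, and the number of connection waits. A hedged sketch of that arithmetic on raw counter samples (`increase()` is simplified to last-minus-first; real Prometheus also extrapolates to the window edges and handles counter resets; all sample values are invented):

```python
def increase(samples):
    """Simplified increase(): counter samples within the window, oldest first."""
    return samples[-1] - samples[0]

# Invented counter samples over a 5m window.
blocked_seconds = [12.0, 12.4, 13.0]   # src_pgsql_conns_blocked_seconds
waited_for      = [340, 355, 370]      # src_pgsql_conns_waited_for

# Mean time a connection request spent blocked during the window.
mean_blocked = increase(blocked_seconds) / increase(waited_for)
```

A sustained value here means requests are queueing for database connections, which is what the two alerts on this panel watch for.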
repo-updater: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101230
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="repo-updater"}[5m]))
repo-updater: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101231
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="repo-updater"}[5m]))
repo-updater: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101232
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="repo-updater"}[5m]))
Repo Updater: Container monitoring (not available on server)
repo-updater: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod repo-updater (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p repo-updater.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' repo-updater (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the repo-updater container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs repo-updater (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^repo-updater.*"}) > 60)
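The query counts containers whose `container_last_seen` timestamp lags the evaluation time by more than a minute. A tiny sketch of that staleness check (container names and timestamps are invented):

```python
# Sketch of `(time() - container_last_seen) > 60`: flag containers that
# have not been seen in the last minute. All values below are invented.
now = 1_700_000_000  # stand-in for time() at evaluation
last_seen = {
    "repo-updater-abc": now - 300,  # last seen 5 minutes ago -> missing
    "repo-updater-def": now - 5,    # seen 5 seconds ago -> healthy
}

missing = [name for name, seen in last_seen.items() if now - seen > 60]
```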
repo-updater: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101302
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}
repo-updater: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with repo-updater issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^repo-updater.*"}[1h]) + rate(container_fs_writes_total{name=~"^repo-updater.*"}[1h]))
Repo Updater: Provisioning indicators (not available on server)
repo-updater: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[1d])
repo-updater: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[1d])
repo-updater: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^repo-updater.*"}[5m])
repo-updater: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^repo-updater.*"}[5m])
repo-updater: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101412
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^repo-updater.*"})
Repo Updater: Golang runtime monitoring
repo-updater: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*repo-updater"})
repo-updater: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*repo-updater"})
Repo Updater: Kubernetes monitoring (only available on Kubernetes)
repo-updater: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/repo-updater/repo-updater?viewPanel=101600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*repo-updater"}) / count by (app) (up{app=~".*repo-updater"}) * 100
Searcher
Performs unindexed searches (diff and commit search, text search for unindexed branches).
To see this dashboard, visit /-/debug/grafana/d/searcher/searcher
on your Sourcegraph instance.
searcher: traffic
Requests per second by code over 10m
This graph is the average number of requests per second searcher is experiencing over the last 10 minutes.
The code is the HTTP Status code. 200 is success. We have a special code "canceled" which is common when doing a large search request and we find enough results before searching all possible repos.
Note: A search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code) (rate(searcher_service_request_total{instance=~`${instance:regex}`}[10m]))
searcher: replica_traffic
Requests per second per replica over 10m
This graph is the average number of requests per second searcher is experiencing over the last 10 minutes broken down per replica.
The code is the HTTP Status code. 200 is success. We have a special code "canceled" which is common when doing a large search request and we find enough results before searching all possible repos.
Note: A search query is translated into an unindexed search query per unique (repo, commit). This means a single user query may result in thousands of requests to searcher.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (rate(searcher_service_request_total{instance=~`${instance:regex}`}[10m]))
searcher: concurrent_requests
Amount of in-flight unindexed search requests (per instance)
This graph is the amount of in-flight unindexed search requests per instance. Consistently high numbers here indicate you may need to scale out searcher.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (searcher_service_running{instance=~`${instance:regex}`})
searcher: unindexed_search_request_errors
Unindexed search request errors every 5m by code
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code)(increase(searcher_service_request_total{code!="200",code!="canceled",instance=~`${instance:regex}`}[5m])) / ignoring(code) group_left sum(increase(searcher_service_request_total{instance=~`${instance:regex}`}[5m])) * 100
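The `ignoring(code) group_left` in this query lets each per-code numerator series divide by the single all-codes denominator. A rough Python equivalent of the resulting percentage (request counts are invented):

```python
# Sketch of the error-percentage query: per-code request increases over 5m
# divided by the overall increase, ignoring the `code` label on the
# denominator. Counts below are invented for illustration.
increases_by_code = {"500": 12, "502": 3, "200": 1985, "canceled": 40}

total = sum(increases_by_code.values())

error_pct_by_code = {
    code: 100.0 * n / total
    for code, n in increases_by_code.items()
    if code not in ("200", "canceled")   # mirrors code!="200",code!="canceled"
}
```

Note that "canceled" responses are excluded from the numerator but still count toward the total, matching the query's label matchers.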
Searcher: Cache store
searcher: store_fetching
Amount of in-flight unindexed search requests fetching code from gitserver (per instance)
Before we can search a commit we fetch the code from gitserver then cache it for future search requests. This graph is the current number of search requests which are in the state of fetching code from gitserver.
Generally this number should remain low since fetching code is fast, but expect bursts. In the case of instances with a monorepo you would expect this number to stay elevated for the duration of fetching the code (which in some cases can take many minutes).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (searcher_store_fetching{instance=~`${instance:regex}`})
searcher: store_fetching_waiting
Amount of in-flight unindexed search requests waiting to fetch code from gitserver (per instance)
We limit the number of requests which can fetch code to prevent overwhelming gitserver. This gauge is the number of requests waiting to be allowed to speak to gitserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (searcher_store_fetch_queue_size{instance=~`${instance:regex}`})
searcher: store_fetching_fail
Amount of unindexed search requests that failed while fetching code from gitserver over 10m (per instance)
This graph should be zero since fetching happens in the background and will not be influenced by user timeouts/etc. Expected upticks in this graph are during gitserver rollouts. If you regularly see this graph have non-zero values please reach out to support.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (rate(searcher_store_fetch_failed{instance=~`${instance:regex}`}[10m]))
Searcher: Index use
searcher: searcher_hybrid_final_state_total
Hybrid search final state over 10m
This graph is about our interactions with the search index (zoekt) to help complete unindexed search requests. Searcher will use indexed search for the files that have not changed between the unindexed commit and the index.
This graph should mostly be "success". The next most common state should be "search-canceled", which happens when result limits are hit or the user starts a new search. After that, the next most common should be "diff-too-large", which happens if the commit is too far from the indexed commit. Any other state should be rare and is likely a sign for further investigation.
Note: On sourcegraph.com "zoekt-list-missing" is also common due to it indexing a subset of repositories. Otherwise every other state should occur rarely.
For a full list of possible state see recordHybridFinalState.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (state)(increase(searcher_hybrid_final_state_total{instance=~`${instance:regex}`}[10m]))
searcher: searcher_hybrid_retry_total
Hybrid search retrying over 10m
This graph should mostly be 0. It will trigger if the underlying index changes while a user's search is in flight, or if Zoekt goes down. Occasional bursts can be expected, but if this graph is regularly above 0 it is a sign for further investigation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (reason)(increase(searcher_hybrid_retry_total{instance=~`${instance:regex}`}[10m]))
Searcher: Cache disk I/O metrics
searcher: cache_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
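These disk panels join `searcher_mount_point_info` (an info-style metric whose value is 1, carrying `device` and `nodename` labels) against node-exporter disk counters via `* on (device, nodename) group_left()`. A simplified sketch of that label join (instance names, devices, and rates are invented):

```python
# Sketch of `info * on (device, nodename) group_left() rate(...)`: each
# searcher instance's mount-info series (value 1) picks up the disk rate
# for its (device, nodename) pair. All label values below are invented.
mount_info = {
    # instance -> (device, nodename); the metric's value is always 1
    "searcher-0": ("sdb", "node-1"),
    "searcher-1": ("sdc", "node-2"),
}
disk_reads_per_sec = {
    ("sdb", "node-1"): 120.0,
    ("sdc", "node-2"): 45.0,
}

reads_by_instance = {
    inst: 1 * disk_reads_per_sec[key]   # info value (1) * matched rate
    for inst, key in mount_info.items()
}
```

Multiplying by the info metric is what re-labels the per-device rate with searcher's `instance` label; the same pattern repeats across all the cache disk I/O panels below.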
searcher: cache_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100320
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100321
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100330
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100331
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
searcher: cache_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100340
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_writes_merged_sec
Merged write request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100341
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
searcher: cache_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), searcher could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device searcher is using, not the load searcher is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100350
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (searcher_mount_point_info{mount_name="cacheDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
Searcher: Searcher GRPC server metrics
searcher: searcher_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))
searcher: searcher_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)
searcher: searcher_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m]))) ))
searcher: searcher_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)) ))
searcher: searcher_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100420
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
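The response-time queries rely on PromQL's `histogram_quantile`, which estimates a quantile from cumulative histogram buckets by linearly interpolating inside the bucket that contains the target rank. The following Python sketch illustrates the estimation with hypothetical bucket data; Prometheus's actual implementation handles additional edge cases:

```python
def histogram_quantile(q, buckets):
    """Approximate PromQL histogram_quantile.

    `buckets` is a sorted list of (le, cumulative_count) pairs ending
    with (float('inf'), total), mirroring a Prometheus histogram.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_le, prev_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            if le == float('inf'):
                # Quantile falls in the open-ended bucket: Prometheus
                # reports the upper bound of the last finite bucket.
                return prev_le
            if count == prev_count:
                return le
            # Linear interpolation within this bucket.
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count)
        prev_le, prev_count = le, count
    return prev_le
```

Note that this is why histogram-based percentiles are estimates: the answer can never be more precise than the configured bucket boundaries.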
searcher: searcher_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100421
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100422
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100430
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100431
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100432
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100440
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100441
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100442
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])))
searcher: searcher_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100450
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method)))
searcher: searcher_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100460
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${searcher_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"searcher.v1.SearcherService"}[2m])) by (grpc_method, grpc_code)
Searcher: Searcher GRPC "internal error" metrics
searcher: searcher_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
searcher: searcher_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
searcher: searcher_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "searcher" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method, grpc_code))
searcher: searcher_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "searcher" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
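The "coarse heuristic" mentioned above amounts to a prefix check on the error text. A hypothetical Python illustration of that idea (the real detector lives in Sourcegraph's Go gRPC instrumentation, and the exact prefix list here is an assumption):

```python
# Assumed prefix list for illustration only; grpc-go prefixes its own
# error messages with "grpc:", which is what the heuristic keys on.
INTERNAL_ERROR_PREFIXES = ("grpc:",)

def looks_like_grpc_internal_error(message):
    """Coarse heuristic: treat errors whose text starts with a known
    grpc-go prefix as library-internal rather than application errors."""
    return message.startswith(INTERNAL_ERROR_PREFIXES)
```

Because it only inspects the message text, the heuristic can miss gRPC-specific failures that happen to be reported without the prefix.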
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
searcher: searcher_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "searcher" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
searcher: searcher_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "searcher" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "searcher" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"searcher.v1.SearcherService",is_internal_error="true",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method, grpc_code))
Searcher: Searcher GRPC retry metrics
searcher: searcher_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "searcher" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService"}[2m])))))))
searcher: searcher_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "searcher" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",is_retried="true",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}"}[2m])) by (grpc_method))))))
searcher: searcher_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried aggregated across all "searcher" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"searcher.v1.SearcherService",grpc_method=~"${searcher_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Searcher: Site configuration client update latency
searcher: searcher_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "searcher" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`.*searcher`,instance=~`${instance:regex}`}
searcher: searcher_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "searcher" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*searcher`,instance=~`${instance:regex}`}[1m]))
Searcher: Database connections
searcher: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="searcher"})
searcher: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="searcher"})
searcher: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="searcher"})
searcher: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="searcher"})
searcher: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100820
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="searcher"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="searcher"}[5m]))
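The two counters in this query appear to correspond to the connection-pool wait statistics exposed by Go's database/sql package (total time spent waiting for a connection, and the number of waits). The panel divides one by the other over the window, so a hedged sketch of the arithmetic looks like:

```python
def mean_blocked_seconds(blocked_seconds_delta, waited_for_delta):
    """Panel formula: seconds spent waiting for a DB connection over the
    window, divided by the number of requests that had to wait.

    Returns 0.0 when nothing waited, to avoid division by zero."""
    if waited_for_delta == 0:
        return 0.0
    return blocked_seconds_delta / waited_for_delta
```

For example, 2.5 seconds of total wait time spread over 5 waiting requests is a mean of 0.5s blocked per request, which would indicate connection-pool contention.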
searcher: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100830
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="searcher"}[5m]))
searcher: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100831
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="searcher"}[5m]))
searcher: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100832
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="searcher"}[5m]))
Searcher: Container monitoring (not available on server)
searcher: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independently of deployment events (such as an upgrade), it could indicate that pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod searcher (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p searcher.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' searcher (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the searcher container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs searcher (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^searcher.*"}) > 60)
searcher: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}
searcher: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}
searcher: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with searcher issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=100903
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^searcher.*"}[1h]) + rate(container_fs_writes_total{name=~"^searcher.*"}[1h]))
Searcher: Provisioning indicators (not available on server)
searcher: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[1d])
searcher: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^searcher.*"}[5m])
searcher: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^searcher.*"}[5m])
searcher: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101012
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^searcher.*"})
Searcher: Golang runtime monitoring
searcher: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*searcher"})
searcher: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*searcher"})
Searcher: Kubernetes monitoring (only available on Kubernetes)
searcher: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/searcher/searcher?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*searcher"}) / count by (app) (up{app=~".*searcher"}) * 100
Symbols
Handles symbol searches for unindexed branches.
To see this dashboard, visit /-/debug/grafana/d/symbols/symbols
on your Sourcegraph instance.
Symbols: Codeintel: Symbols API
symbols: codeintel_symbols_api_total
Aggregate API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
Aggregate successful API operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_errors_total
Aggregate API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
Aggregate API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
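Note that this error-rate query adds the error counter to the operation counter in the denominator, which keeps the ratio at or below 100% even if the operation counter were to track only successful operations. A small Python sketch of the formula with hypothetical counter deltas:

```python
def operation_error_rate(errors, total):
    """Panel formula: errors / (total + errors) * 100.

    Adding `errors` to the denominator guards against the two counters
    being tracked disjointly; returns 0.0 when there is no activity."""
    denom = total + errors
    if denom == 0:
        return 0.0
    return 100.0 * errors / denom
```

For example, 5 failed operations alongside 15 counted operations yields a 25% error rate under this formula.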
symbols: codeintel_symbols_api_total
API operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_99th_percentile_duration
99th percentile successful API operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op,parseAmount)(rate(src_codeintel_symbols_api_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_api_errors_total
API operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_api_error_rate
API operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op,parseAmount)(increase(src_codeintel_symbols_api_total{job=~"^symbols.*"}[5m])) + sum by (op,parseAmount)(increase(src_codeintel_symbols_api_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols parser
symbols: symbols
In-flight parse jobs
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_symbols_parsing{job=~"^symbols.*"})
symbols: symbols
Parser queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_symbols_parse_queue_size{job=~"^symbols.*"})
symbols: symbols
Parse queue timeouts
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_symbols_parse_queue_timeouts_total{job=~"^symbols.*"})
symbols: symbols
Parse failures every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_codeintel_symbols_parse_failed_total{job=~"^symbols.*"}[5m])
symbols: codeintel_symbols_parser_total
Aggregate parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
Aggregate successful parser operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_errors_total
Aggregate parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Aggregate parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_parser_total
Parser operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_99th_percentile_duration
99th percentile successful parser operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100121
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_parser_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_parser_errors_total
Parser operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100122
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_parser_error_rate
Parser operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100123
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_parser_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_parser_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols cache janitor
symbols: symbols
Size in bytes of the on-disk cache
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_codeintel_symbols_store_cache_size_bytes
symbols: symbols
Cache eviction operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_codeintel_symbols_store_evictions_total[5m])
symbols: symbols
Cache eviction operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_codeintel_symbols_store_errors_total[5m])
Symbols: Codeintel: Symbols repository fetcher
symbols: symbols
In-flight repository fetch operations
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_codeintel_symbols_fetching
symbols: symbols
Repository fetch queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_symbols_fetch_queue_size{job=~"^symbols.*"})
symbols: codeintel_symbols_repository_fetcher_total
Aggregate fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
Aggregate successful fetcher operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_errors_total
Aggregate fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Aggregate fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_repository_fetcher_total
Fetcher operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100320
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_99th_percentile_duration
99th percentile successful fetcher operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100321
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_repository_fetcher_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_repository_fetcher_errors_total
Fetcher operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100322
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_repository_fetcher_error_rate
Fetcher operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100323
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_repository_fetcher_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_repository_fetcher_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Codeintel: Symbols gitserver client
symbols: codeintel_symbols_gitserver_total
Aggregate gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
Aggregate successful gitserver client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_errors_total
Aggregate gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Aggregate gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
symbols: codeintel_symbols_gitserver_total
Gitserver client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_99th_percentile_duration
99th percentile successful gitserver client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_symbols_gitserver_duration_seconds_bucket{job=~"^symbols.*"}[5m])))
symbols: codeintel_symbols_gitserver_errors_total
Gitserver client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))
symbols: codeintel_symbols_gitserver_error_rate
Gitserver client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m])) / (sum by (op)(increase(src_codeintel_symbols_gitserver_total{job=~"^symbols.*"}[5m])) + sum by (op)(increase(src_codeintel_symbols_gitserver_errors_total{job=~"^symbols.*"}[5m]))) * 100
Symbols: Rockskip
symbols: p95_rockskip_search_request_duration
95th percentile search request duration over 5m
The 95th percentile duration of search requests to Rockskip in seconds. Lower is better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_rockskip_service_search_request_duration_seconds_bucket[5m])) by (le))
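For intuition on how `histogram_quantile` arrives at a p95 value: Prometheus locates the cumulative bucket containing the target rank and interpolates linearly within it. A sketch with hypothetical cumulative bucket counts (not from a real instance):

```shell
# Hypothetical cumulative histogram: 90 requests took <= 0.1s, 98 took <= 0.5s,
# 100 took <= 1.0s. The 95th observation falls in the (0.1, 0.5] bucket, and
# histogram_quantile interpolates linearly between the bucket bounds.
awk 'BEGIN {
  target = 0.95 * 100          # rank of the 95th-percentile observation
  lo = 0.1;  hi = 0.5          # bounds of the bucket holding that rank
  lo_n = 90; hi_n = 98         # cumulative counts at those bounds
  printf "%.2f\n", lo + (target - lo_n) / (hi_n - lo_n) * (hi - lo)
}'
```

This yields an estimated p95 of 0.35s; the accuracy of the estimate depends on how finely the buckets divide the latency range.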
symbols: rockskip_in_flight_search_requests
Number of in-flight search requests
The number of search requests currently being processed by Rockskip. If there is little traffic and requests are served quickly relative to the Prometheus scrape interval, it is possible for this number to read 0 even while search requests are being processed.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_rockskip_service_in_flight_search_requests)
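If this gauge reads 0 and you want to confirm requests are actually flowing, a counter is not subject to the sampling caveat above. A sketch using the `_count` series that the duration histogram from the p95 panel should also expose (the series name is inferred from that histogram's bucket metric, not confirmed independently):

```shell
sum(rate(src_rockskip_service_search_request_duration_seconds_count[5m]))
```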
symbols: rockskip_search_request_errors
Search request errors every 5m
The number of search requests that returned an error in the last 5 minutes. The errors tracked here are application errors only; gRPC errors are not included. We generally want this to be 0.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_rockskip_service_search_request_errors[5m]))
symbols: p95_rockskip_index_job_duration
95th percentile index job duration over 5m
The 95th percentile duration of index jobs in seconds. The range of values is very large, because the metric measures quick delta updates as well as full index jobs. Lower is better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.95, sum(rate(src_rockskip_service_index_job_duration_seconds_bucket[5m])) by (le))
symbols: rockskip_in_flight_index_jobs
Number of in-flight index jobs
The number of index jobs currently being processed by Rockskip. This includes delta updates as well as full index jobs.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_rockskip_service_in_flight_index_jobs)
symbols: rockskip_index_job_errors
Index job errors every 5m
The number of index jobs that returned an error in the last 5 minutes. If the errors are persistent, users will see alerts in the UI. The service logs will contain more detailed information about the kind of errors. We generally want this to be 0.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_rockskip_service_index_job_errors[5m]))
symbols: rockskip_number_of_repos_indexed
Number of repositories indexed by Rockskip
The number of repositories indexed by Rockskip. Apart from an initial transient phase in which many repos are being indexed, this number should be low and relatively stable, and only increase by small increments. To verify that this number makes sense, compare ROCKSKIP_MIN_REPO_SIZE_MB with the repository sizes reported in the gitserver_repos table.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100520
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_rockskip_service_repos_indexed)
Symbols: Symbols GRPC server metrics
symbols: symbols_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m]))
symbols: symbols_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method)
symbols: symbols_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m]))) ))
symbols: symbols_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${symbols_method:regex}`,grpc_code!="OK",instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method)) ))
symbols: symbols_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100620
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100621
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100622
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100630
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100631
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100632
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100640
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100641
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100642
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(src_grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])))
symbols: symbols_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100650
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method)))
symbols: symbols_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100660
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${symbols_method:regex}`,instance=~`${instance:regex}`,grpc_service=~"symbols.v1.SymbolsService"}[2m])) by (grpc_method, grpc_code)
Symbols: Symbols GRPC "internal error" metrics
symbols: symbols_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "symbols" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService"}[2m])))))))
symbols: symbols_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "symbols" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method))))))
symbols: symbols_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "symbols" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100702
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method, grpc_code))
symbols: symbols_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "symbols" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService"}[2m])))))))
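The coarse heuristic mentioned above can be illustrated with a minimal sketch (this is an illustration only, not Sourcegraph's actual implementation): classify an error message as a gRPC internal error when it begins with the `grpc:` prefix.

```shell
# Minimal sketch of the coarse prefix heuristic described above: an error
# message starting with "grpc:" is treated as originating from grpc-go itself.
is_internal_error() {
  case "$1" in
    "grpc:"*) echo "true" ;;
    *)        echo "false" ;;
  esac
}
is_internal_error "grpc: the client connection is closing"   # → true
is_internal_error "repository not found"                     # → false
```

As the note says, this kind of prefix match is deliberately coarse, so some gRPC-originated failures can slip through unclassified.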
symbols: symbols_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "symbols" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method))))))
symbols: symbols_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "symbols" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "symbols" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"symbols.v1.SymbolsService",is_internal_error="true",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method, grpc_code))
Symbols: Symbols GRPC retry metrics
symbols: symbols_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "symbols" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"symbols.v1.SymbolsService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"symbols.v1.SymbolsService"}[2m])))))))
symbols: symbols_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried aggregated across all "symbols" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"symbols.v1.SymbolsService",is_retried="true",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}"}[2m])) by (grpc_method))))))
symbols: symbols_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried, aggregated across all "symbols" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100802
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"symbols.v1.SymbolsService",grpc_method=~"${symbols_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Symbols: Site configuration client update latency
symbols: symbols_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "symbols" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`.*symbols`,instance=~`${instance:regex}`}
symbols: symbols_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "symbols" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*symbols`,instance=~`${instance:regex}`}[1m]))
Symbols: Database connections
symbols: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="symbols"})
symbols: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="symbols"})
symbols: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="symbols"})
symbols: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="symbols"})
symbols: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101020
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="symbols"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="symbols"}[5m]))
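As an illustration, the ratio in this query (blocked seconds divided by the number of waits) can be sketched in Python; the sample values below are hypothetical, not real metric data:

```python
def mean_blocked_seconds(blocked_seconds_increase, waited_for_increase):
    """Average time a caller waited for a pooled connection over the window,
    mirroring increase(src_pgsql_conns_blocked_seconds[5m]) divided by
    increase(src_pgsql_conns_waited_for[5m])."""
    if waited_for_increase == 0:
        return 0.0  # no requests had to wait for a connection
    return blocked_seconds_increase / waited_for_increase

# Hypothetical 5m window: 12 seconds of total blocking across 300 waits
print(mean_blocked_seconds(12.0, 300))  # 0.04, i.e. 40ms per waiting request
```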
symbols: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101030
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="symbols"}[5m]))
symbols: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101031
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="symbols"}[5m]))
symbols: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101032
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="symbols"}[5m]))
Symbols: Container monitoring (not available on server)
symbols: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod symbols (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p symbols.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' symbols (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the symbols container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs symbols (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^symbols.*"}) > 60)
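A minimal Python sketch of the staleness check this query performs (the timestamps below are hypothetical):

```python
import time

def missing_containers(last_seen, now=None, threshold=60.0):
    """Names of containers not seen for more than `threshold` seconds,
    mirroring: count by(name) ((time() - container_last_seen) > 60)."""
    if now is None:
        now = time.time()
    return sorted(name for name, ts in last_seen.items() if now - ts > threshold)

# Hypothetical: symbols-0 last seen 90s ago, symbols-1 seen 5s ago
now = 1_700_000_000.0
print(missing_containers({"symbols-0": now - 90, "symbols-1": now - 5}, now=now))
# ['symbols-0']
```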
symbols: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}
symbols: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101102
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}
symbols: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with symbols issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^symbols.*"}[1h]) + rate(container_fs_writes_total{name=~"^symbols.*"}[1h]))
Symbols: Provisioning indicators (not available on server)
symbols: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[1d])
symbols: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^symbols.*"}[5m])
symbols: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^symbols.*"}[5m])
symbols: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^symbols.*"})
Symbols: Golang runtime monitoring
symbols: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*symbols"})
symbols: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*symbols"})
Symbols: Kubernetes monitoring (only available on Kubernetes)
symbols: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/symbols/symbols?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*symbols"}) / count by (app) (up{app=~".*symbols"}) * 100
Syntect Server
Handles syntax highlighting for code files.
To see this dashboard, visit /-/debug/grafana/d/syntect-server/syntect-server
on your Sourcegraph instance.
syntect-server: syntax_highlighting_errors
Syntax highlighting errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_syntax_highlighting_requests{status="error"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
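This query computes the errored share of all requests as a percentage; a sketch of the same arithmetic, with hypothetical counts:

```python
def error_percentage(error_increase, total_increase):
    """Percentage of requests that errored over the window, mirroring
    sum(increase(...{status="error"}[5m])) / sum(increase(...[5m])) * 100."""
    if total_increase == 0:
        return 0.0
    return error_increase * 100 / total_increase

# Hypothetical: 75 errored requests out of 600 total in the 5m window
print(error_percentage(75, 600))  # 12.5
```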
syntect-server: syntax_highlighting_timeouts
Syntax highlighting timeouts every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_syntax_highlighting_requests{status="timeout"}[5m])) / sum(increase(src_syntax_highlighting_requests[5m])) * 100
syntect-server: syntax_highlighting_panics
Syntax highlighting panics every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_syntax_highlighting_requests{status="panic"}[5m]))
syntect-server: syntax_highlighting_worker_deaths
Syntax highlighter worker deaths every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_syntax_highlighting_requests{status="hss_worker_timeout"}[5m]))
Syntect Server: Container monitoring (not available on server)
syntect-server: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod syntect-server (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p syntect-server.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' syntect-server (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the syntect-server container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs syntect-server (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^syntect-server.*"}) > 60)
syntect-server: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}
syntect-server: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with syntect-server issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^syntect-server.*"}[1h]) + rate(container_fs_writes_total{name=~"^syntect-server.*"}[1h]))
Syntect Server: Provisioning indicators (not available on server)
syntect-server: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[1d])
syntect-server: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^syntect-server.*"}[5m])
syntect-server: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^syntect-server.*"}[5m])
syntect-server: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^syntect-server.*"})
Syntect Server: Kubernetes monitoring (only available on Kubernetes)
syntect-server: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/syntect-server/syntect-server?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*syntect-server"}) / count by (app) (up{app=~".*syntect-server"}) * 100
Zoekt
Indexes repositories, populates the search index, and responds to indexed search queries.
To see this dashboard, visit /-/debug/grafana/d/zoekt/zoekt
on your Sourcegraph instance.
zoekt: total_repos_aggregate
Total number of repos (aggregate)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.
Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished indexing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (__name__) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap"})
zoekt: total_repos_per_instance
Total number of repos (per instance)
Sudden changes can be caused by indexing configuration changes.
Additionally, a discrepancy between "index_num_assigned" and "index_queue_cap" could indicate a bug.
Legend:
- index_num_assigned: # of repos assigned to Zoekt
- index_num_indexed: # of repos Zoekt has indexed
- index_queue_cap: # of repos Zoekt is aware of, including those that it has finished processing
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (__name__, instance) ({__name__=~"index_num_assigned|index_num_indexed|index_queue_cap",instance=~"${instance:regex}"})
zoekt: repos_stopped_tracking_total_aggregate
The number of repositories we stopped tracking over 5m (aggregate)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(index_num_stopped_tracking_total[5m]))
zoekt: repos_stopped_tracking_total_per_instance
The number of repositories we stopped tracking over 5m (per instance)
Repositories we stop tracking are soft-deleted during the next cleanup job.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (increase(index_num_stopped_tracking_total{instance=~`${instance:regex}`}[5m]))
zoekt: average_resolve_revision_duration
Average resolve revision duration over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100020
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(resolve_revision_seconds_sum[5m])) / sum(rate(resolve_revision_seconds_count[5m]))
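This is the standard pattern for deriving a mean from a histogram's _sum and _count series; a sketch with hypothetical rates:

```python
def average_duration(rate_of_sum, rate_of_count):
    """Mean duration from a histogram's _sum and _count series, mirroring
    sum(rate(resolve_revision_seconds_sum[5m])) /
    sum(rate(resolve_revision_seconds_count[5m]))."""
    if rate_of_count == 0:
        return 0.0
    return rate_of_sum / rate_of_count

# Hypothetical: 4.5 seconds of resolve work per second, 9 requests per second
print(average_duration(4.5, 9.0))  # 0.5, i.e. a 500ms average
```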
zoekt: get_index_options_error_increase
The number of repositories we failed to get indexing options over 5m
When considering indexing a repository, we ask the frontend for that repository's index configuration. The most likely reason this would fail is an inability to resolve branch names to git SHAs.
This value can spike during deployments and similar events; only sustained periods of errors indicate an underlying issue. When sustained, repositories will not receive updated indexes.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100021
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(get_index_options_error_total[5m]))
Zoekt: Search requests
zoekt: indexed_search_request_duration_p99_aggregate
99th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 99th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
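PromQL's histogram_quantile estimates the quantile by linear interpolation within cumulative buckets; a simplified sketch of that estimation, using hypothetical bucket counts:

```python
def histogram_quantile(q, buckets):
    """Estimate a quantile from cumulative histogram buckets by linear
    interpolation, approximating PromQL's histogram_quantile(). `buckets`
    is a sorted list of (upper_bound, cumulative_count) pairs ending
    with (inf, total_count)."""
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                return prev_bound  # quantile falls in the open-ended bucket
            # interpolate within the bucket that contains the rank
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return prev_bound

# Hypothetical buckets: 100 searches finished within 1s, 200 within 5s
print(histogram_quantile(0.75, [(1.0, 100), (5.0, 200), (float("inf"), 200)]))  # 3.0
```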
zoekt: indexed_search_request_duration_p90_aggregate
90th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 90th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
zoekt: indexed_search_request_duration_p75_aggregate
75th percentile indexed search duration over 1m (aggregate)
This dashboard shows the 75th percentile of search request durations over the last minute (aggregated across all instances).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name)(rate(zoekt_search_duration_seconds_bucket[1m])))
zoekt: indexed_search_request_duration_p99_by_instance
99th percentile indexed search duration over 1m (per instance)
This dashboard shows the 99th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
zoekt: indexed_search_request_duration_p90_by_instance
90th percentile indexed search duration over 1m (per instance)
This dashboard shows the 90th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
zoekt: indexed_search_request_duration_p75_by_instance
75th percentile indexed search duration over 1m (per instance)
This dashboard shows the 75th percentile of search request durations over the last minute (broken out per instance).
Large duration spikes can be an indicator of saturation and / or a performance regression.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, instance)(rate(zoekt_search_duration_seconds_bucket{instance=~`${instance:regex}`}[1m])))
zoekt: indexed_search_num_concurrent_requests_aggregate
Amount of in-flight indexed search requests (aggregate)
This dashboard shows the current number of indexed search requests that are in-flight, aggregated across all instances.
In-flight search requests include both running and queued requests.
The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (name) (zoekt_search_running)
zoekt: indexed_search_num_concurrent_requests_by_instance
Amount of in-flight indexed search requests (per instance)
This dashboard shows the current number of indexed search requests that are in-flight, broken out per instance.
In-flight search requests include both running and queued requests.
The number of in-flight requests can serve as a proxy for the general load that webserver instances are under.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100121
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance, name) (zoekt_search_running{instance=~`${instance:regex}`})
zoekt: indexed_search_concurrent_request_growth_rate_1m_aggregate
Rate of growth of in-flight indexed search requests over 1m (aggregate)
This dashboard shows the rate of growth of in-flight requests, aggregated across all instances.
In-flight search requests include both running and queued requests.
This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100130
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (name) (deriv(zoekt_search_running[1m]))
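PromQL's deriv() fits a least-squares line through the gauge's samples and reports its per-second slope; a sketch of that computation over hypothetical samples:

```python
def growth_rate(samples):
    """Per-second slope of a gauge via simple least-squares regression,
    approximating PromQL's deriv(). `samples` is a list of
    (timestamp_seconds, value) pairs."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    num = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    den = sum((t - mean_t) ** 2 for t, _ in samples)
    return num / den

# Hypothetical: in-flight requests rising by 2 per second over 20 seconds
print(growth_rate([(0, 10), (10, 30), (20, 50)]))  # 2.0
```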
zoekt: indexed_search_concurrent_request_growth_rate_1m_per_instance
Rate of growth of in-flight indexed search requests over 1m (per instance)
This dashboard shows the rate of growth of in-flight requests, broken out per instance.
In-flight search requests include both running and queued requests.
This metric gives a notion of how quickly the indexed-search backend is working through its request load (taking into account the request arrival rate and processing time). A sustained high rate of growth can indicate that the indexed-search backend is saturated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100131
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance) (deriv(zoekt_search_running[1m]))
zoekt: indexed_search_request_errors
Indexed search request errors every 5m by code
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100140
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (code)(increase(src_zoekt_request_duration_seconds_count{code!~"2.."}[5m])) / ignoring(code) group_left sum(increase(src_zoekt_request_duration_seconds_count[5m])) * 100
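Here `/ ignoring(code) group_left` divides each by-code series by the single overall total; an illustrative sketch with hypothetical 5m increases:

```python
def error_share_by_code(increase_by_code):
    """Per-status-code percentage of all requests, mirroring the panel's
    division of a by-code vector by the overall total via
    `/ ignoring(code) group_left`."""
    total = sum(increase_by_code.values())
    return {
        code: n * 100 / total
        for code, n in increase_by_code.items()
        if not code.startswith("2")  # keep non-2xx codes, as in code!~"2.."
    }

# Hypothetical 5m increases per response code
print(error_share_by_code({"200": 900, "500": 75, "504": 25}))
# {'500': 7.5, '504': 2.5}
```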
zoekt: zoekt_shards_sched
Current number of zoekt scheduler processes in a state
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued.
A high number of batch queries is a sign of a heavy load of slow queries, or that your systems are underprovisioned and normal search queries are taking too long.
For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100150
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (type, state) (zoekt_shards_sched)
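The interactive-to-batch demotion described above can be sketched as follows; the time budget here is a hypothetical placeholder, not Zoekt's actual threshold (see the linked sched.go for the real state machine):

```python
import enum

class SchedState(enum.Enum):
    INTERACTIVE = "interactive"
    BATCH = "batch"

def classify(elapsed_seconds, interactive_budget=5.0):
    """Which scheduler class a running query belongs to: queries start
    interactive and are demoted to batch once they exceed the budget.
    The budget value is illustrative only."""
    return SchedState.INTERACTIVE if elapsed_seconds <= interactive_budget else SchedState.BATCH

print(classify(1.0).value)   # interactive
print(classify(30.0).value)  # batch
```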
zoekt: zoekt_shards_sched_total
Rate of zoekt scheduler process state transitions in the last 5m
Each ongoing search request starts its life as an interactive query. If it takes too long it becomes a batch query. Between state transitions it can be queued.
A high number of batch queries is a sign of a heavy load of slow queries, or that your systems are underprovisioned and normal search queries are taking too long.
For a full explanation of the states see https://github.com/sourcegraph/zoekt/blob/930cd1c28917e64c87f0ce354a0fd040877cbba1/shards/sched.go#L311-L340
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100151
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (type, state) (rate(zoekt_shards_sched[5m]))
Zoekt: Git fetch durations
zoekt: 90th_percentile_successful_git_fetch_durations_5m
90th percentile successful git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="true"}[5m])))
zoekt: 90th_percentile_failed_git_fetch_durations_5m
90th percentile failed git fetch durations over 5m
Long git fetch times can be a leading indicator of saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(index_fetch_seconds_bucket{success="false"}[5m])))
Zoekt: Indexing results
zoekt: repo_index_state_aggregate
Index results state count over 5m (aggregate)
This dashboard shows the outcomes of recently completed indexing jobs across all index-server instances.
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (state) (increase(index_repo_seconds_count[5m]))
zoekt: repo_index_state_per_instance
Index results state count over 5m (per instance)
This dashboard shows the outcomes of recently completed indexing jobs, split out across each index-server instance.
(You can use the "instance" filter at the top of the page to select a particular instance.)
A persistent failing state indicates some repositories cannot be indexed, perhaps due to size and timeouts.
Legend:
- fail -> the indexing job failed
- success -> the indexing job succeeded and the index was updated
- success_meta -> the indexing job succeeded, but only metadata was updated
- noop -> the indexing job succeeded, but we didn't need to update anything
- empty -> the indexing job succeeded, but the index was empty (i.e. the repository is empty)
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (instance, state) (increase(index_repo_seconds_count{instance=~`${instance:regex}`}[5m]))
zoekt: repo_index_success_speed_heatmap
Successful indexing durations
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le, state) (increase(index_repo_seconds_bucket{state="success"}[$__rate_interval]))
zoekt: repo_index_fail_speed_heatmap
Failed indexing durations
Failures happening after a long time indicates timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le, state) (increase(index_repo_seconds_bucket{state="fail"}[$__rate_interval]))
zoekt: repo_index_success_speed_p99
99th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100320
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p90
90th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100321
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p75
75th percentile successful indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of successful indexing jobs aggregated across all Zoekt instances.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100322
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="success"}[5m])))
zoekt: repo_index_success_speed_p99_per_instance
99th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p99 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100330
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
zoekt: repo_index_success_speed_p90_per_instance
90th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p90 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100331
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
zoekt: repo_index_success_speed_p75_per_instance
75th percentile successful indexing durations over 5m (per instance)
This dashboard shows the p75 duration of successful indexing jobs broken out per Zoekt instance.
Latency increases can indicate bottlenecks in the indexserver.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100332
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="success",instance=~`${instance:regex}`}[5m])))
zoekt: repo_index_failed_speed_p99
99th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p99 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100340
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p90
90th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p90 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100341
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p75
75th percentile failed indexing durations over 5m (aggregate)
This dashboard shows the p75 duration of failed indexing jobs aggregated across all Zoekt instances.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100342
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name)(rate(index_repo_seconds_bucket{state="fail"}[5m])))
zoekt: repo_index_failed_speed_p99_per_instance
99th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p99 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100350
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
zoekt: repo_index_failed_speed_p90_per_instance
90th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p90 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100351
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
zoekt: repo_index_failed_speed_p75_per_instance
75th percentile failed indexing durations over 5m (per instance)
This dashboard shows the p75 duration of failed indexing jobs broken out per Zoekt instance.
Failures happening after a long time indicate timeouts.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100352
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, instance)(rate(index_repo_seconds_bucket{state="fail",instance=~`${instance:regex}`}[5m])))
Zoekt: Indexing queue statistics
zoekt: indexed_num_scheduled_jobs_aggregate
# scheduled index jobs (aggregate)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(index_queue_len)
zoekt: indexed_num_scheduled_jobs_per_instance
# scheduled index jobs (per instance)
A queue that is constantly growing could be a leading indicator of a bottleneck or under-provisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLindex_queue_len{instance=~`${instance:regex}`}
zoekt: indexed_queueing_delay_heatmap
Job queueing delay heatmap
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le) (increase(index_queue_age_seconds_bucket[$__rate_interval]))
zoekt: indexed_queueing_delay_p99_9_aggregate
99.9th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p99.9 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
The 99.9th percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100420
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
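The histogram_quantile queries in this section estimate percentiles by linearly interpolating within cumulative Prometheus histogram buckets. A minimal Python sketch of that mechanism (the bucket boundaries and counts below are hypothetical, not taken from a real instance):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    buckets: list of (le, cumulative_count) pairs sorted by upper bound
    `le`, ending with float('inf'); mirrors Prometheus's linear
    interpolation within the bucket that contains the target rank.
    """
    total = buckets[-1][1]
    if total == 0:
        return float("nan")
    rank = q * total
    prev_le, prev_count = 0.0, 0.0
    for le, count in buckets:
        if count >= rank:
            if le == float("inf"):
                return prev_le  # open-ended bucket: fall back to lower bound
            # linear interpolation inside the bucket
            return prev_le + (le - prev_le) * (rank - prev_count) / (count - prev_count)
        prev_le, prev_count = le, count
    return buckets[-1][0]

# hypothetical index_queue_age_seconds buckets: 90% of jobs waited <= 30s
buckets = [(1.0, 10), (10.0, 50), (30.0, 90), (float("inf"), 100)]
print(histogram_quantile(0.90, buckets))  # 30.0
```

Prometheus applies this interpolation to the buckets produced by the inner `sum by (le) (...)` aggregation, so percentile estimates are only as fine-grained as the configured bucket boundaries.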
zoekt: indexed_queueing_delay_p90_aggregate
90th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p90 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100421
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p75_aggregate
75th percentile job queueing delay over 5m (aggregate)
This dashboard shows the p75 job queueing delay aggregated across all Zoekt instances.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100422
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name)(rate(index_queue_age_seconds_bucket[5m])))
zoekt: indexed_queueing_delay_p99_9_per_instance
99.9th percentile job queueing delay over 5m (per instance)
This dashboard shows the p99.9 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
The 99.9th percentile dashboard is useful for capturing the long tail of queueing delays (on the order of 24+ hours).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100430
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~`${instance:regex}`}[5m])))
zoekt: indexed_queueing_delay_p90_per_instance
90th percentile job queueing delay over 5m (per instance)
This dashboard shows the p90 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100431
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~`${instance:regex}`}[5m])))
zoekt: indexed_queueing_delay_p75_per_instance
75th percentile job queueing delay over 5m (per instance)
This dashboard shows the p75 job queueing delay, broken out per Zoekt instance.
The queueing delay represents the amount of time an indexing job spent in the queue before it was processed.
Large queueing delays can be an indicator of:
- resource saturation
- each Zoekt replica has too many jobs for it to be able to process all of them promptly. In this scenario, consider adding additional Zoekt replicas to distribute the work better.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100432
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, instance)(rate(index_queue_age_seconds_bucket{instance=~`${instance:regex}`}[5m])))
Zoekt: Virtual Memory Statistics
zoekt: memory_map_areas_percentage_used
Process memory map areas percentage used (per instance)
Processes have a limited number of memory map areas that they can use. In Zoekt, memory map areas are mainly used to load shards into memory for queries (via mmap), but they are also used to load shared libraries, etc.
See https://en.wikipedia.org/wiki/Memory-mapped_file and the related articles for more information about memory maps.
Once the memory map limit is reached, the Linux kernel will prevent the process from creating any additional memory map areas. This could cause the process to crash.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELL(proc_metrics_memory_map_current_count{instance=~`${instance:regex}`} / proc_metrics_memory_map_max_limit{instance=~`${instance:regex}`}) * 100
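On a Linux host, the same ratio can be sanity-checked directly from procfs: each line of /proc/&lt;pid&gt;/maps is one memory map area, and vm.max_map_count is the per-process limit. A hedged sketch (assumes Linux; the inspected PID defaults to the current process purely for illustration):

```python
import os

def memory_map_usage_pct(pid=None):
    """Mirror the panel's (current map count / max limit) * 100
    calculation using procfs. Assumes a Linux host."""
    pid = pid or os.getpid()
    with open(f"/proc/{pid}/maps") as f:
        current = sum(1 for _ in f)  # one line per memory map area
    with open("/proc/sys/vm/max_map_count") as f:
        limit = int(f.read())
    return current / limit * 100

print(f"{memory_map_usage_pct():.2f}% of memory map areas used")
```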
Zoekt: Compound shards
zoekt: compound_shards_aggregate
# of compound shards (aggregate)
The total number of compound shards aggregated over all instances.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(index_number_compound_shards) by (app)
zoekt: compound_shards_per_instance
# of compound shards (per instance)
The total number of compound shards per instance.
This number should be consistent if the number of indexed repositories doesn't change.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(index_number_compound_shards{instance=~`${instance:regex}`}) by (instance)
zoekt: average_shard_merging_duration_success
Average successful shard merging duration over 1 hour
Average duration of a successful merge over the last hour.
The duration depends on the target compound shard size: the larger the compound shard, the longer a merge will take. Since the target compound shard size is set when zoekt-indexserver starts, the average duration should be consistent.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(index_shard_merging_duration_seconds_sum{error="false"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="false"}[1h]))
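Queries of this shape compute a mean by dividing the rate of a histogram's _sum series by the rate of its _count series over the same window. As a sketch with hypothetical sample values:

```python
def mean_merge_duration(sum_increase, count_increase):
    """Average seconds per merge over a window: increase of the
    histogram's _sum series divided by the increase of its _count
    series."""
    if count_increase == 0:
        return float("nan")  # no merges in the window; the panel shows no data
    return sum_increase / count_increase

# hypothetical: 1200s of total merge time across 8 successful merges
print(mean_merge_duration(1200.0, 8))  # 150.0
```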
zoekt: average_shard_merging_duration_error
Average failed shard merging duration over 1 hour
Average duration of a failed merge over the last hour.
This curve should be flat. Any deviation should be investigated.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(index_shard_merging_duration_seconds_sum{error="true"}[1h])) / sum(rate(index_shard_merging_duration_seconds_count{error="true"}[1h]))
zoekt: shard_merging_errors_aggregate
Number of errors during shard merging (aggregate)
Number of errors during shard merging aggregated over all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100620
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(index_shard_merging_duration_seconds_count{error="true"}) by (app)
zoekt: shard_merging_errors_per_instance
Number of errors during shard merging (per instance)
Number of errors during shard merging per instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100621
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(index_shard_merging_duration_seconds_count{instance=~`${instance:regex}`, error="true"}) by (instance)
zoekt: shard_merging_merge_running_per_instance
If shard merging is running (per instance)
Set to 1 if shard merging is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100630
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (instance) (index_shard_merging_running{instance=~`${instance:regex}`})
zoekt: shard_merging_vacuum_running_per_instance
If vacuum is running (per instance)
Set to 1 if vacuum is running.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100631
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (instance) (index_vacuum_running{instance=~`${instance:regex}`})
Zoekt: Network I/O pod metrics (only available on Kubernetes)
zoekt: network_sent_bytes_aggregate
Transmission rate over 5m (aggregate)
The rate of bytes sent over the network across all Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
zoekt: network_received_packets_per_instance
Transmission rate over 5m (per instance)
The rate of bytes sent over the network by individual Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_received_bytes_aggregate
Receive rate over 5m (aggregate)
The rate of bytes received from the network across all Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`.*indexed-search.*`}[5m]))
zoekt: network_received_bytes_per_instance
Receive rate over 5m (per instance)
The rate of bytes received from the network by individual Zoekt pods.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_bytes_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_dropped_by_instance
Transmit packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100720
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_errors_per_instance
Errors encountered while transmitting over 5m (per instance)
An increase in transmission errors could indicate a networking issue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100721
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_transmit_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_received_packets_dropped_by_instance
Receive packet drop rate over 5m (by instance)
An increase in dropped packets could be a leading indicator of network saturation.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100722
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_packets_dropped_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
zoekt: network_transmitted_packets_errors_by_instance
Errors encountered while receiving over 5m (per instance)
An increase in errors while receiving could indicate a networking issue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100723
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (container_label_io_kubernetes_pod_name) (rate(container_network_receive_errors_total{container_label_io_kubernetes_pod_name=~`${instance:regex}`}[5m]))
Zoekt: Zoekt Webserver GRPC server metrics
zoekt: zoekt_webserver_grpc_request_rate_all_methods
Request rate across all methods over 2m
The number of gRPC requests received per second across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))
zoekt: zoekt_webserver_grpc_request_rate_per_method
Request rate per-method over 2m
The number of gRPC requests received per second broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_started_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)
zoekt: zoekt_webserver_error_percentage_all_methods
Error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_code!="OK",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))) / (sum(rate(grpc_server_handled_total{instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m]))) ))
zoekt: zoekt_webserver_grpc_error_percentage_per_method
Error percentage per-method over 2m
The percentage of gRPC requests that fail per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ( (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,grpc_code!="OK",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)) / (sum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)) ))
zoekt: zoekt_webserver_p99_response_time_per_method
99th percentile response time per method over 2m
The 99th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100820
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p90_response_time_per_method
90th percentile response time per method over 2m
The 90th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100821
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p75_response_time_per_method
75th percentile response time per method over 2m
The 75th percentile response time per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100822
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_handling_seconds_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p99_9_response_size_per_method
99.9th percentile total response size per method over 2m
The 99.9th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100830
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p90_response_size_per_method
90th percentile total response size per method over 2m
The 90th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100831
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p75_response_size_per_method
75th percentile total response size per method over 2m
The 75th percentile total per-RPC response size per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100832
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_sent_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p99_9_invididual_sent_message_size_per_method
99.9th percentile individual sent message size per method over 2m
The 99.9th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100840
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.999, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p90_invididual_sent_message_size_per_method
90th percentile individual sent message size per method over 2m
The 90th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100841
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.90, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_p75_invididual_sent_message_size_per_method
75th percentile individual sent message size per method over 2m
The 75th percentile size of each individual protocol buffer message sent by the service per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100842
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.75, sum by (le, name, grpc_method)(rate(grpc_server_sent_individual_message_size_bytes_per_rpc_bucket{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))
zoekt: zoekt_webserver_grpc_response_stream_message_count_per_method
Average streaming response message count per-method over 2m
The average number of response messages sent during a streaming RPC method, broken out per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100850
on your Sourcegraph instance.
Technical details
Query:
SHELL((sum(rate(grpc_server_msg_sent_total{grpc_type="server_stream",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method))/(sum(rate(grpc_server_started_total{grpc_type="server_stream",instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method)))
zoekt: zoekt_webserver_grpc_all_codes_per_method
Response codes rate per-method over 2m
The rate of all generated gRPC response codes per method, aggregated across all instances.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100860
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(grpc_server_handled_total{grpc_method=~`${zoekt_webserver_method:regex}`,instance=~`${webserver_instance:regex}`,grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])) by (grpc_method, grpc_code)
Zoekt: Zoekt Webserver GRPC "internal error" metrics
zoekt: zoekt_webserver_grpc_clients_error_percentage_all_methods
Client baseline error percentage across all methods over 2m
The percentage of gRPC requests that fail across all methods (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_code!="OK"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
zoekt: zoekt_webserver_grpc_clients_error_percentage_per_method
Client baseline error percentage per-method over 2m
The percentage of gRPC requests that fail per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",grpc_code!="OK"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
zoekt: zoekt_webserver_grpc_clients_all_codes_per_method
Client baseline response codes rate per-method over 2m
The rate of all generated gRPC response codes per method (regardless of whether or not there was an internal error), aggregated across all "zoekt_webserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100902
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
zoekt: zoekt_webserver_grpc_clients_internal_error_percentage_all_methods
Client-observed gRPC internal error percentage across all methods over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors across all methods, aggregated across all "zoekt_webserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_code!="OK",is_internal_error="true"}[2m])))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
zoekt: zoekt_webserver_grpc_clients_internal_error_percentage_per_method
Client-observed gRPC internal error percentage per-method over 2m
The percentage of gRPC requests that appear to fail due to gRPC internal errors per method, aggregated across all "zoekt_webserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",grpc_code!="OK",is_internal_error="true"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
zoekt: zoekt_webserver_grpc_clients_internal_error_all_codes_per_method
Client-observed gRPC internal error response code rate per-method over 2m
The rate of gRPC internal-error response codes per method, aggregated across all "zoekt_webserver" clients.
Note: Internal errors are ones that appear to originate from the https://github.com/grpc/grpc-go library itself, rather than from any user-written application code. These errors can be caused by a variety of issues, and can originate from either the code-generated "zoekt_webserver" gRPC client or gRPC server. These errors might be solvable by adjusting the gRPC configuration, or they might indicate a bug in Sourcegraph's use of gRPC.
When debugging, knowing that a particular error comes from the grpc-go library itself (an internal error) as opposed to normal application code can be helpful when trying to fix it.
Note: Internal errors are detected via a very coarse heuristic (seeing if the error starts with grpc:, etc.). Because of this, it's possible that some gRPC-specific issues might not be categorized as internal errors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_method_status{grpc_service=~"zoekt.webserver.v1.WebserverService",is_internal_error="true",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method, grpc_code))
Zoekt: Zoekt Webserver GRPC retry metrics
zoekt: zoekt_webserver_grpc_clients_retry_percentage_across_all_methods
Client retry percentage across all methods over 2m
The percentage of gRPC requests that were retried across all methods, aggregated across all "zoekt_webserver" clients.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",is_retried="true"}[2m])))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService"}[2m])))))))
zoekt: zoekt_webserver_grpc_clients_retry_percentage_per_method
Client retry percentage per-method over 2m
The percentage of gRPC requests that were retried, aggregated across all "zoekt_webserver" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELL(100.0 * ((((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",is_retried="true",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))) / ((sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}"}[2m])) by (grpc_method))))))
zoekt: zoekt_webserver_grpc_clients_retry_count_per_method
Client retry count per-method over 2m
The count of gRPC requests that were retried, aggregated across all "zoekt_webserver" clients, broken out per method.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101002
on your Sourcegraph instance.
Technical details
Query:
SHELL(sum(rate(src_grpc_client_retry_attempts_total{grpc_service=~"zoekt.webserver.v1.WebserverService",grpc_method=~"${zoekt_webserver_method:regex}",is_retried="true"}[2m])) by (grpc_method))
Zoekt: Data disk I/O metrics
zoekt: data_disk_reads_sec
Read request rate over 1m (per instance)
The number of read requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_writes_sec
Write request rate over 1m (per instance)
The number of write requests that were issued to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_read_throughput
Read throughput over 1m (per instance)
The amount of data that was read from the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_write_throughput
Write throughput over 1m (per instance)
The amount of data that was written to the device per second.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_read_duration
Average read duration over 1m (per instance)
The average time for read requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101120
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
zoekt: data_disk_write_duration
Average write duration over 1m (per instance)
The average time for write requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101121
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_write_time_seconds_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
zoekt: data_disk_read_request_size
Average read request size over 1m (per instance)
The average size of read requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101130
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_read_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_completed_total{instance=~`node-exporter.*`}[1m])))))))
zoekt: data_disk_write_request_size
Average write request size over 1m (per instance)
The average size of write requests that were issued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101131
on your Sourcegraph instance.
Technical details
Query:
SHELL(((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_written_bytes_total{instance=~`node-exporter.*`}[1m])))))) / ((max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_completed_total{instance=~`node-exporter.*`}[1m])))))))
zoekt: data_disk_reads_merged_sec
Merged read request rate over 1m (per instance)
The number of read requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101140
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_reads_merged_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_writes_merged_sec
Merged write request rate over 1m (per instance)
The number of write requests merged per second that were queued to the device.
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101141
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_writes_merged_total{instance=~`node-exporter.*`}[1m])))))
zoekt: data_disk_average_queue_size
Average queue size over 1m (per instance)
The number of I/O operations that were being queued or being serviced. See https://blog.actorsfit.com/a?ID=00200-428fa2ac-e338-4540-848c-af9a3eb1ebd2 for background (avgqu-sz).
Note: Disk statistics are per device, not per service. In certain environments (such as common docker-compose setups), zoekt could be one of many services using this disk. These statistics are best interpreted as the load experienced by the device zoekt is using, not the load zoekt is solely responsible for causing.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101150
on your Sourcegraph instance.
Technical details
Query:
SHELL(max by (instance) (zoekt_indexserver_mount_point_info{mount_name="indexDir",instance=~`${instance:regex}`} * on (device, nodename) group_left() (max by (device, nodename) (rate(node_disk_io_time_weighted_seconds_total{instance=~`node-exporter.*`}[1m])))))
Zoekt: [zoekt-indexserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-indexserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-indexserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-indexserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-indexserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-indexserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^zoekt-indexserver.*"}) > 60)
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101202
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-indexserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^zoekt-indexserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-indexserver.*"}[1h]))
Zoekt: [zoekt-webserver] Container monitoring (not available on server)
zoekt: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod zoekt-webserver (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p zoekt-webserver.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' zoekt-webserver (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the zoekt-webserver container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs zoekt-webserver (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^zoekt-webserver.*"}) > 60)
zoekt: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101302
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}
zoekt: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with zoekt-webserver issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^zoekt-webserver.*"}[1h]) + rate(container_fs_writes_total{name=~"^zoekt-webserver.*"}[1h]))
Zoekt: [zoekt-indexserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-indexserver.*"}[5m])
zoekt: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101412
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^zoekt-indexserver.*"})
Zoekt: [zoekt-webserver] Provisioning indicators (not available on server)
zoekt: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[1d])
zoekt: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101510
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
zoekt: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101511
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^zoekt-webserver.*"}[5m])
zoekt: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101512
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^zoekt-webserver.*"})
Zoekt: Kubernetes monitoring (only available on Kubernetes)
zoekt: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/zoekt/zoekt?viewPanel=101600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*indexed-search"}) / count by (app) (up{app=~".*indexed-search"}) * 100
Prometheus
Sourcegraph's all-in-one Prometheus and Alertmanager service.
To see this dashboard, visit /-/debug/grafana/d/prometheus/prometheus
on your Sourcegraph instance.
Prometheus: Metrics
prometheus: metrics_cardinality
Metrics with highest cardinalities
The 10 highest-cardinality metrics collected by this Prometheus instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLtopk(10, count by (__name__, job)({__name__!=""}))
prometheus: samples_scraped
Samples scraped by job
The number of samples scraped after metric relabeling was applied by this Prometheus instance.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(job) (scrape_samples_post_metric_relabeling{job!=""})
prometheus: prometheus_rule_eval_duration
Average prometheus rule group evaluation duration over 10m by rule group
A high value here indicates Prometheus rule evaluation is taking longer than expected. It might indicate that certain rule groups are taking too long to evaluate, or Prometheus is underprovisioned.
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(rule_group) (avg_over_time(prometheus_rule_group_last_duration_seconds[10m]))
prometheus: prometheus_rule_eval_failures
Failed prometheus rule evaluations over 5m by rule group
Rules that Sourcegraph ships with are grouped under /sg_config_prometheus. Custom rules are grouped under /sg_prometheus_addons.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(rule_group) (rate(prometheus_rule_evaluation_failures_total[5m]))
Prometheus: Alerts
prometheus: alertmanager_notification_latency
Alertmanager notification latency over 1m by integration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(integration) (rate(alertmanager_notification_latency_seconds_sum[1m]))
prometheus: alertmanager_notification_failures
Failed alertmanager notifications over 1m by integration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(integration) (rate(alertmanager_notifications_failed_total[1m]))
Prometheus: Internals
prometheus: prometheus_config_status
Prometheus configuration reload status
A value of 1 indicates Prometheus reloaded its configuration successfully.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLprometheus_config_last_reload_successful
prometheus: alertmanager_config_status
Alertmanager configuration reload status
A value of 1 indicates Alertmanager reloaded its configuration successfully.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLalertmanager_config_last_reload_successful
prometheus: prometheus_tsdb_op_failure
Prometheus tsdb failures by operation over 1m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLincrease(label_replace({__name__=~"prometheus_tsdb_(.*)_failed_total"}, "operation", "$1", "__name__", "(.+)s_failed_total")[5m:1m])
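The label_replace call rewrites each failure metric's name into an operation label. A minimal Python sketch of the same regex rewrite (the function name is illustrative, not part of Sourcegraph):

```python
import re

def operation_label(metric_name: str):
    """Mirrors label_replace(..., "operation", "$1", "__name__", "(.+)s_failed_total").
    PromQL anchors the regex against the whole label value, so fullmatch is the
    Python equivalent; the greedy (.+) keeps everything before "s_failed_total"."""
    m = re.fullmatch(r"(.+)s_failed_total", metric_name)
    return m.group(1) if m else None

assert operation_label("prometheus_tsdb_compactions_failed_total") == "prometheus_tsdb_compaction"
assert operation_label("prometheus_tsdb_head_truncations_failed_total") == "prometheus_tsdb_head_truncation"
```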
prometheus: prometheus_target_sample_exceeded
Prometheus scrapes that exceed the sample limit over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLincrease(prometheus_target_scrapes_exceeded_sample_limit_total[10m])
prometheus: prometheus_target_sample_duplicate
Prometheus scrapes rejected due to duplicate timestamps over 10m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLincrease(prometheus_target_scrapes_sample_duplicate_timestamp_total[10m])
Prometheus: Container monitoring (not available on server)
prometheus: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod prometheus (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p prometheus.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' prometheus (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the prometheus container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs prometheus (note this will include logs from both the previous and the currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^prometheus.*"}) > 60)
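The query counts a container as missing when its container_last_seen timestamp is more than 60 seconds stale. A minimal sketch of that check, with hypothetical container names and timestamps:

```python
def missing_containers(last_seen: dict, now: float, threshold: float = 60.0) -> set:
    """Names whose last-seen timestamp is older than `threshold` seconds,
    mirroring count by(name)((time() - container_last_seen) > 60)."""
    return {name for name, ts in last_seen.items() if now - ts > threshold}

# prometheus was last seen 120s before `now`; grafana only 30s before.
assert missing_containers({"prometheus": 0.0, "grafana": 90.0}, now=120.0) == {"prometheus"}
```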
prometheus: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}
prometheus: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}
prometheus: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with prometheus issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^prometheus.*"}[1h]) + rate(container_fs_writes_total{name=~"^prometheus.*"}[1h]))
Prometheus: Provisioning indicators (not available on server)
prometheus: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[1d])
prometheus: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^prometheus.*"}[5m])
prometheus: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^prometheus.*"}[5m])
prometheus: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container's main process or child processes were terminated by the OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^prometheus.*"})
Prometheus: Kubernetes monitoring (only available on Kubernetes)
prometheus: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/prometheus/prometheus?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*prometheus"}) / count by (app) (up{app=~".*prometheus"}) * 100
Executor
Executes jobs in an isolated environment.
To see this dashboard, visit /-/debug/grafana/d/executor/executor
on your Sourcegraph instance.
Executor: Executor: Executor jobs
executor: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
executor: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.
- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
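The interpretation above can be sketched as a simple ratio of enqueued to processed jobs over the window (the function name and sample counts are illustrative):

```python
def queue_growth_rate(enqueued_increase: float, processed_increase: float) -> float:
    """Mirrors sum(increase(src_executor_total[30m])) divided by
    sum(increase(src_executor_processor_total[30m])) for one queue."""
    return enqueued_increase / processed_increase

# Processing keeps up: a ratio < 1 means the queue is draining.
assert queue_growth_rate(90, 120) < 1
# Enqueues outpace processing: a ratio > 1 means the queue is growing.
assert queue_growth_rate(150, 100) > 1
```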
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (queue)(increase(src_executor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
executor: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_queued_duration_seconds_total{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Executor: Executor: Executor jobs
executor: multiqueue_executor_dequeue_cache_size
Unprocessed executor job dequeue cache size for multiqueue executors
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLmultiqueue_executor_dequeue_cache_size{queue=~"$queue",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}
Executor: Executor: Executor jobs
executor: executor_handlers
Executor active handlers
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_executor_processor_handlers{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"})
executor: executor_processor_total
Executor operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_99th_percentile_duration
Aggregate successful executor operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_executor_processor_duration_seconds_bucket{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_errors_total
Executor operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: executor_processor_error_rate
Executor operation error rate over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_executor_processor_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_executor_processor_errors_total{queue=~"${queue:regex}",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
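Every *_error_rate panel in this dashboard uses the same arithmetic: errors divided by total attempts, times 100. A minimal sketch, assuming the *_total series counts successful operations (which the total + errors denominator suggests):

```python
def error_rate_percent(errors: float, successes: float) -> float:
    """errors / (successes + errors) * 100, guarding against an empty window."""
    attempts = successes + errors
    return errors / attempts * 100 if attempts else 0.0

# 25 failed operations out of 100 attempts in the window -> 25%.
assert error_rate_percent(25, 75) == 25.0
```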
Executor: Executor: Queue API client
executor: apiworker_apiclient_queue_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_apiclient_queue_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_apiclient_queue_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_apiclient_queue_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_queue_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_queue_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_queue_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Files API client
executor: apiworker_apiclient_files_total
Aggregate client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_99th_percentile_duration
Aggregate successful client operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_errors_total
Aggregate client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_error_rate
Aggregate client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_apiclient_files_total
Client operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_99th_percentile_duration
99th percentile successful client operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_apiclient_files_duration_seconds_bucket{sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_apiclient_files_errors_total
Client operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_apiclient_files_error_rate
Client operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_apiclient_files_total{sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_apiclient_files_errors_total{sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job setup
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"setup.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job execution
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"exec.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Job teardown
executor: apiworker_command_total
Aggregate command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
Aggregate successful command operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_errors_total
Aggregate command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100702
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Aggregate command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100703
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
executor: apiworker_command_total
Command operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_99th_percentile_duration
99th percentile successful command operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_apiworker_command_duration_seconds_bucket{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])))
executor: apiworker_command_errors_total
Command operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))
executor: apiworker_command_error_rate
Command operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100713
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) / (sum by (op)(increase(src_apiworker_command_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m])) + sum by (op)(increase(src_apiworker_command_errors_total{op=~"teardown.*",sg_job=~"^sourcegraph-executors.*"}[5m]))) * 100
Executor: Executor: Compute instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode!~"(idle|iowait)",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors",mode="system",sg_instance=~"$instance"}) by (sg_instance) * 100
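The query sums the per-second rate of CPU time spent in non-idle, non-iowait modes and divides by the core count. A minimal sketch with hypothetical per-mode rates (mode names follow node_exporter conventions):

```python
def cpu_utilization_percent(per_mode_rates: dict, num_cores: int) -> float:
    """Non-idle, non-iowait CPU-seconds per second, summed across modes,
    divided by core count, as a percentage."""
    busy = sum(rate for mode, rate in per_mode_rates.items()
               if mode not in ("idle", "iowait"))
    return busy / num_cores * 100

# A 2-core machine spending 0.5 cores in user and 0.25 cores in system mode.
assert cpu_utilization_percent(
    {"user": 0.5, "system": 0.25, "idle": 1.0, "iowait": 0.25}, 2) == 37.5
```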
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less than all processes" time because, for processes to be waiting for CPU time, there must be other processes consuming it.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates the amount of memory in use (total minus available memory, which includes cache and buffers) as a percentage. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism or down-sizing machines.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELL(1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}) by (sg_instance)) * 100
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained figures of 0% < x < ~100% are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELL(rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) * 100
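The pgsteal/pgscan ratio described above can be sketched as follows (hypothetical counter rates; the four terms mirror the anon/direct/file/kswapd counter families summed in the query):

```python
def vmem_efficiency_pct(pgsteal_rates, pgscan_rates):
    """Page-reclaim efficiency: pages actually freed (pgsteal) over
    pages scanned (pgscan), as a percentage."""
    return sum(pgsteal_rates) / sum(pgscan_rates) * 100

# e.g. 90 of 100 scanned pages/sec were reclaimed -> a healthy short spike
print(vmem_efficiency_pct([60, 10, 15, 5], [70, 10, 15, 5]))  # 90.0
```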
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100820
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
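The nested `label_replace` calls in this query remap the `device` label into a `disk` label. A sketch of the effective mapping (the regexes are taken from the query; the function name and mapping logic are illustrative, since `label_replace` fully anchors its regex):

```python
import re

def disk_label(device):
    """Effective result of the query's nested label_replace: device-mapper
    devices (dm-*, as used by ignite VMs) are grouped under an "ignite"
    disk label, other devices not starting with "d" keep their own name,
    and anything else gets no disk label."""
    if re.fullmatch(r"dm-.*", device):   # outer label_replace overrides
        return "ignite"
    if re.fullmatch(r"[^d].+", device):  # inner label_replace
        return device
    return None

print(disk_label("sda"))   # 'sda'
print(disk_label("dm-0"))  # 'ignite'
```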
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing the faulty drive(s), if any.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100821
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no process could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100822
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100830
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100831
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100832
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100840
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link being congested, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100841
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100842
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors",sg_instance=~"$instance"}[$__rate_interval])) by(sg_instance)
Executor: Docker Registry Mirror instance metrics
executor: node_cpu_utilization
CPU utilization (minus idle/iowait)
Indicates the amount of CPU time excluding idle and iowait time, divided by the number of cores, as a percentage.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode!~"(idle|iowait)",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) / count(node_cpu_seconds_total{sg_job=~"sourcegraph-executors-registry",mode="system",sg_instance=~"docker-registry"}) by (sg_instance) * 100
executor: node_cpu_saturation_cpu_wait
CPU saturation (time waiting)
Indicates the average summed time that some (but strictly not all) non-idle processes spent waiting for CPU time. If this is higher than normal, the CPU is underpowered for the workload and more powerful machines should be provisioned. This only represents a "less-than-all-processes" time because, for processes to be waiting for CPU time, there must be other process(es) consuming CPU time.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_cpu_waiting_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_memory_utilization
Memory utilization
Indicates the percentage of memory in use, counting memory used for cache and buffers as available. Consistently high numbers are generally fine so long as memory saturation figures are within acceptable ranges; these figures may be more useful for informing executor provisioning decisions, such as increasing worker parallelism, down-sizing machines, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
SHELL(1 - sum(node_memory_MemAvailable_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance) / sum(node_memory_MemTotal_bytes{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}) by (sg_instance)) * 100
executor: node_memory_saturation_vmeff
Memory saturation (vmem efficiency)
Indicates the efficiency of page reclaim, calculated as pgsteal/pgscan. Optimal figures are short spikes of near 100% and above, indicating that a high ratio of scanned pages are actually being freed, or exactly 0%, indicating that pages aren't being scanned as there is no memory pressure. Sustained numbers >~100% may be a sign of imminent memory exhaustion, while sustained figures of 0% < x < ~100% are very serious.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
SHELL(rate(node_vmstat_pgsteal_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgsteal_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) / (rate(node_vmstat_pgscan_anon{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_direct{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_file{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]) + rate(node_vmstat_pgscan_kswapd{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) * 100
executor: node_memory_saturation_pressure_stalled
Memory saturation (fully stalled)
Indicates the amount of time all non-idle processes were stalled waiting on memory operations to complete. This is often correlated with the vmem efficiency ratio when pressure on available memory is high. If they're not correlated, this could indicate issues with the machine hardware and/or configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_memory_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_io_disk_utilization
Disk IO utilization (percentage time spent in IO)
Indicates the percentage of time a disk was busy. If this is less than 100%, then the disk has spare utilization capacity. However, a value of 100% does not necessarily indicate the disk is at max capacity. For single, serial request-serving devices, 100% may indicate maximum saturation, but for SSDs and RAID arrays this is less likely to be the case, as they are capable of serving multiple requests in parallel; other metrics such as throughput and request queue size should be factored in.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100920
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(label_replace(label_replace(rate(node_disk_io_time_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk) * 100
executor: node_io_disk_saturation
Disk IO saturation (avg IO queue size)
Indicates the number of outstanding/queued IO requests. High but short-lived queue sizes may not present an issue, but if they're consistently/often high and/or monotonically increasing, the disk may be failing or simply too slow for the amount of activity required. Consider replacing the drive(s) with SSDs if they are not already, and/or replacing the faulty drive(s), if any.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100921
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(label_replace(label_replace(rate(node_disk_io_time_weighted_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval]), "disk", "$1", "device", "^([^d].+)"), "disk", "ignite", "device", "dm-.*")) by(sg_instance,disk)
executor: node_io_disk_saturation_pressure_full
Disk IO saturation (avg time of all processes stalled)
Indicates the averaged amount of time for which all non-idle processes were stalled waiting for IO to complete simultaneously, i.e. where no process could make progress.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100922
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(node_pressure_io_stalled_seconds_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])
executor: node_io_network_utilization
Network IO utilization (Rx)
Indicates the average summed receiving throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100930
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Rx packets dropped)
Number of dropped received packets. This can happen if the receive queues/buffers become full due to slow packet processing throughput. The queues/buffers could be configured to be larger as a stop-gap but the processing application should be investigated as soon as possible. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=not%20otherwise%20counted.-,rx_dropped,-Number%20of%20packets
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100931
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Rx)
Number of bad/malformed packets received. https://www.kernel.org/doc/html/latest/networking/statistics.html#:~:text=excluding%20the%20FCS.-,rx_errors,-Total%20number%20of
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100932
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_receive_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_utilization
Network IO utilization (Tx)
Indicates the average summed transmitted throughput of all network interfaces. This is often predominantly composed of the WAN/internet-connected interface, and knowing normal/good figures depends on knowing the bandwidth of the underlying hardware and the workloads.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100940
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_bytes_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance) * 8
executor: node_io_network_saturation
Network IO saturation (Tx packets dropped)
Number of dropped transmitted packets. This can happen if the receiving side's receive queues/buffers become full due to slow packet processing throughput, the network link being congested, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100941
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_drop_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
executor: node_io_network_saturation
Network IO errors (Tx)
Number of packet transmission errors. This is distinct from Tx packet dropping, and can indicate a failing NIC, improperly configured network options anywhere along the line, signal noise, etc.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=100942
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(rate(node_network_transmit_errs_total{sg_job=~"sourcegraph-executors-registry",sg_instance=~"docker-registry"}[$__rate_interval])) by(sg_instance)
Executor: Golang runtime monitoring
executor: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(sg_instance) (go_goroutines{sg_job=~".*sourcegraph-executors"})
executor: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/executor/executor?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(sg_instance) (go_gc_duration_seconds{sg_job=~".*sourcegraph-executors"})
Global Containers Resource Usage
Container usage and provisioning indicators of all services.
To see this dashboard, visit /-/debug/grafana/d/containers/containers
on your Sourcegraph instance.
Global Containers Resource Usage: Containers (not available on server)
containers: container_memory_usage
Container memory usage of all services
This value indicates the memory usage of all containers.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
containers: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
This value indicates the CPU usage of all containers.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}
Global Containers Resource Usage: Containers: Provisioning Indicators (not available on server)
containers: container_memory_usage_provisioning
Container memory usage (5m maximum) of services that exceed 80% memory limit
Containers that exceed 80% memory limit. The value indicates potential underprovisioned resources.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
containers: container_cpu_usage_provisioning
Container cpu usage total (5m maximum) across all cores of services that exceed 80% cpu limit
Containers that exceed 80% CPU limit. The value indicates potential underprovisioned resources.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}[5m]) >= 80
containers: container_oomkill_events_total
Container OOMKILL events total
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) >= 1
containers: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reasons.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/containers/containers?viewPanel=100130
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^(frontend|sourcegraph-frontend|gitserver|pgsql|codeintel-db|codeinsights|precise-code-intel-worker|prometheus|redis-cache|redis-store|redis-exporter|repo-updater|searcher|symbols|syntect-server|worker|zoekt-indexserver|zoekt-webserver|indexed-search|grafana|blobstore|jaeger).*"}) > 60)
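The `time() - container_last_seen > 60` condition in this query can be sketched as follows (hypothetical container names and timestamps):

```python
import time

def missing_containers(last_seen_by_name, now=None, threshold_s=60):
    """Names of containers not seen for more than `threshold_s` seconds,
    mirroring the query's `time() - container_last_seen > 60` condition."""
    now = time.time() if now is None else now
    return sorted(name for name, ts in last_seen_by_name.items()
                  if now - ts > threshold_s)

now = 1_000_000
print(missing_containers({"gitserver": now - 5, "worker": now - 120}, now=now))
# ['worker']
```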
Code Intelligence > Autoindexing
The service at internal/codeintel/autoindexing
.
To see this dashboard, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing
on your Sourcegraph instance.
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Summary
codeintel-autoindexing:
Auto-index jobs inserted over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_dbstore_indexes_inserted[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Auto-indexing job scheduler operation error rate over 10m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) / (sum(increase(src_codeintel_autoindexing_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m])) + sum(increase(src_codeintel_autoindexing_errors_total{op='HandleIndexSchedule',job=~"^${source:regex}.*"}[10m]))) * 100
codeintel-autoindexing: executor_queue_size
Unprocessed executor job queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
codeintel-autoindexing: executor_queue_growth_rate
Unprocessed executor job queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs for the selected queue.
- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (queue)(increase(src_executor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m])) / sum by (queue)(increase(src_executor_processor_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"}[30m]))
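The enqueue/processed ratio and its interpretation can be sketched as follows (hypothetical 30m job counts, not the PromQL itself):

```python
def queue_growth_rate(enqueued_30m, processed_30m):
    """Jobs enqueued divided by jobs processed over the window:
    < 1 means the queue is draining, = 1 steady state, > 1 growing."""
    return enqueued_30m / processed_30m

print(queue_growth_rate(90, 100))   # queue draining (< 1)
print(queue_growth_rate(120, 100))  # queue growing (> 1)
```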
codeintel-autoindexing: executor_queued_max_age
Unprocessed executor job queue longest time in queue
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (queue)(src_executor_queued_duration_seconds_total{queue=~"codeintel",job=~"^(executor|sourcegraph-code-intel-indexers|executor-batches|frontend|sourcegraph-frontend|worker|sourcegraph-executors).*"})
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Service
codeintel-autoindexing: codeintel_autoindexing_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
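The error-rate formula used throughout these panels can be sketched as follows (hypothetical counts; it assumes, matching the query's shape, that the `_total` counter excludes errors, which is why errors are added back into the denominator):

```python
def error_rate_pct(errors, successes):
    """Errors as a percentage of all operations, where `successes` plays
    the role of the `_total` counter and `errors` the `_errors_total`
    counter."""
    total = successes + errors
    return errors / total * 100 if total else 0.0

print(error_rate_pct(5, 95))  # 5.0
```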
Code Intelligence > Autoindexing: Codeintel: Autoindexing > GQL transport
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Store (internal)
codeintel-autoindexing: codeintel_autoindexing_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Background jobs (internal)
codeintel-autoindexing: codeintel_autoindexing_background_total
Aggregate background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration
Aggregate successful background operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_errors_total
Aggregate background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_error_rate
Aggregate background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_background_total
Background operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_99th_percentile_duration
99th percentile successful background operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_background_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_background_errors_total
Background operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_background_error_rate
Background operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_background_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_background_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Inference service (internal)
codeintel-autoindexing: codeintel_autoindexing_inference_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: codeintel_autoindexing_inference_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_inference_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_inference_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_inference_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_inference_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_inference_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Luasandbox service
codeintel-autoindexing: luasandbox_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100602
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100603
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-autoindexing: luasandbox_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_luasandbox_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: luasandbox_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: luasandbox_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_luasandbox_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_luasandbox_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Janitor task > Codeintel autoindexing janitor unknown repository
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_unknown_repository_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_unknown_repository_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_janitor_unknown_repository_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_repository_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100713
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Janitor task > Codeintel autoindexing janitor unknown commit
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_unknown_commit_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_unknown_commit_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_janitor_unknown_commit_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_unknown_commit_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100813
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Autoindexing: Codeintel: Autoindexing > Janitor task > Codeintel autoindexing janitor expired
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_expired_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_autoindexing_janitor_expired_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_expired_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_autoindexing_janitor_expired_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-autoindexing: codeintel_autoindexing_janitor_expired_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-autoindexing/codeintel-autoindexing?viewPanel=100913
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_autoindexing_janitor_expired_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav
The service at `internal/codeintel/codenav`.
To see this dashboard, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav
on your Sourcegraph instance.
Code Intelligence > Code Nav: Codeintel: CodeNav > Service
codeintel-codenav: codeintel_codenav_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav: Codeintel: CodeNav > LSIF store
codeintel-codenav: codeintel_codenav_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav: Codeintel: CodeNav > GQL Transport
codeintel-codenav: codeintel_codenav_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Code Nav: Codeintel: CodeNav > Store
codeintel-codenav: codeintel_codenav_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-codenav: codeintel_codenav_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_codenav_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-codenav: codeintel_codenav_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-codenav: codeintel_codenav_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-codenav/codeintel-codenav?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_codenav_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_codenav_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies
The service at internal/codeintel/policies.
To see this dashboard, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies
on your Sourcegraph instance.
Code Intelligence > Policies: Codeintel: Policies > Service
codeintel-policies: codeintel_policies_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > Store
codeintel-policies: codeintel_policies_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > GQL Transport
codeintel-policies: codeintel_policies_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-policies: codeintel_policies_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_policies_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-policies: codeintel_policies_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-policies: codeintel_policies_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_policies_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_policies_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Policies: Codeintel: Policies > Repository Pattern Matcher task
codeintel-policies: codeintel_background_policies_updated_total_total
Repository pattern matcher updates every 5m
Number of configuration policies whose repository membership list was updated
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-policies/codeintel-policies?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_policies_updated_total_total{job=~"^${source:regex}.*"}[5m]))
Code Intelligence > Ranking
The service at internal/codeintel/ranking.
To see this dashboard, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking
on your Sourcegraph instance.
Code Intelligence > Ranking: Codeintel: Ranking > Service
codeintel-ranking: codeintel_ranking_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_ranking_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-ranking: codeintel_ranking_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Ranking > Store
codeintel-ranking: codeintel_ranking_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_ranking_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-ranking: codeintel_ranking_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Ranking > LSIFStore
codeintel-ranking: codeintel_ranking_lsifstore_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_lsifstore_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_ranking_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_lsifstore_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_lsifstore_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_ranking_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-ranking: codeintel_ranking_lsifstore_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_lsifstore_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_lsifstore_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_lsifstore_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
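The `histogram_quantile` calls in these duration panels estimate a quantile from cumulative `le` (less-than-or-equal) buckets by linear interpolation within the bucket that contains the target rank. A simplified sketch of that estimation (the real Prometheus implementation handles more edge cases):

```python
def histogram_quantile(q, buckets):
    """Estimate the q-quantile from cumulative histogram buckets.

    buckets: list of (upper_bound, cumulative_count) pairs sorted by
    bound, mirroring Prometheus `le` buckets; the last bound is +Inf.
    """
    total = buckets[-1][1]
    rank = q * total
    prev_bound, prev_count = 0.0, 0.0
    for bound, count in buckets:
        if count >= rank:
            if bound == float("inf"):
                # Quantile falls in the +Inf bucket: best estimate is the
                # highest finite bound seen so far.
                return prev_bound
            # Linear interpolation within the containing bucket.
            return prev_bound + (bound - prev_bound) * (rank - prev_count) / (count - prev_count)
        prev_bound, prev_count = bound, count
    return buckets[-1][0]

# 60 observations <= 0.1s, 90 <= 0.5s, 100 total: the 99th percentile
# lands in the +Inf bucket, so the estimate is the 0.5s bound.
print(histogram_quantile(0.99, [(0.1, 60), (0.5, 90), (float("inf"), 100)]))
```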
codeintel-ranking: codeintel_ranking_lsifstore_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_lsifstore_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_lsifstore_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_lsifstore_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Pipeline task > Codeintel ranking symbol exporter
codeintel-ranking: codeintel_ranking_symbol_exporter_records_processed_total
Records processed every 5m
The number of records processed by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_symbol_exporter_records_processed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_symbol_exporter_records_altered_total
Records altered every 5m
The number of records altered by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_symbol_exporter_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_symbol_exporter_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_symbol_exporter_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_symbol_exporter_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_symbol_exporter_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_symbol_exporter_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_symbol_exporter_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_symbol_exporter_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_symbol_exporter_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_symbol_exporter_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_symbol_exporter_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Pipeline task > Codeintel ranking file reference count seed mapper
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_records_processed_total
Records processed every 5m
The number of records processed by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_seed_mapper_records_processed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_records_altered_total
Records altered every 5m
The number of records altered by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_seed_mapper_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_seed_mapper_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_file_reference_count_seed_mapper_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_seed_mapper_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_seed_mapper_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_seed_mapper_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_file_reference_count_seed_mapper_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_file_reference_count_seed_mapper_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Pipeline task > Codeintel ranking file reference count mapper
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_records_processed_total
Records processed every 5m
The number of records processed by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_mapper_records_processed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_records_altered_total
Records altered every 5m
The number of records altered by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_mapper_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100510
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_mapper_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100511
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_file_reference_count_mapper_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100512
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_mapper_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_mapper_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100513
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_mapper_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_file_reference_count_mapper_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_file_reference_count_mapper_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Pipeline task > Codeintel ranking file reference count reducer
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_records_processed_total
Records processed every 5m
The number of records processed by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_reducer_records_processed_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_records_altered_total
Records altered every 5m
The number of records altered by this pipeline task.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_file_reference_count_reducer_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_reducer_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_file_reference_count_reducer_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_reducer_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_file_reference_count_reducer_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_file_reference_count_reducer_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_file_reference_count_reducer_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_file_reference_count_reducer_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking processed references janitor
codeintel-ranking: codeintel_ranking_processed_references_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_processed_references_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_references_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_processed_references_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_references_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_references_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_references_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_processed_references_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_processed_references_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_references_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_references_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100713
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_references_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_processed_references_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_processed_references_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking processed paths janitor
codeintel-ranking: codeintel_ranking_processed_paths_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_processed_paths_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_paths_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_processed_paths_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_paths_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_paths_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_paths_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_processed_paths_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_processed_paths_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_paths_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_processed_paths_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100813
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_processed_paths_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_processed_paths_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_processed_paths_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking exported uploads janitor
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_exported_uploads_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_exported_uploads_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_exported_uploads_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_exported_uploads_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=100913
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking deleted exported uploads janitor
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_deleted_exported_uploads_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101012
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_deleted_exported_uploads_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101013
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_deleted_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking abandoned exported uploads janitor
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_abandoned_exported_uploads_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101112
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_abandoned_exported_uploads_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101113
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_abandoned_exported_uploads_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking rank counts janitor
codeintel-ranking: codeintel_ranking_rank_counts_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_rank_counts_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_counts_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
sum(increase(src_codeintel_ranking_rank_counts_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_counts_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_rank_counts_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_counts_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
histogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_rank_counts_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_rank_counts_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_rank_counts_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_counts_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101213
on your Sourcegraph instance.
Technical details
Query:
sum by (op)(increase(src_codeintel_ranking_rank_counts_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_rank_counts_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_rank_counts_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Ranking: Codeintel: Uploads > Janitor task > Codeintel ranking rank janitor
codeintel-ranking: codeintel_ranking_rank_janitor_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_rank_janitor_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_janitor_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_ranking_rank_janitor_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_janitor_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_rank_janitor_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_janitor_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_ranking_rank_janitor_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-ranking: codeintel_ranking_rank_janitor_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_rank_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-ranking: codeintel_ranking_rank_janitor_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-ranking/codeintel-ranking?viewPanel=101313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_ranking_rank_janitor_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_ranking_rank_janitor_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_ranking_rank_janitor_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads
The service at internal/codeintel/uploads.
To see this dashboard, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads
on your Sourcegraph instance.
Code Intelligence > Uploads: Codeintel: Uploads > Service
codeintel-uploads: codeintel_uploads_total
Aggregate service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_99th_percentile_duration
Aggregate successful service operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_errors_total
Aggregate service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100002
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_error_rate
Aggregate service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100003
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_total
Service operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_99th_percentile_duration
99th percentile successful service operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_errors_total
Service operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_error_rate
Service operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Store (internal)
codeintel-uploads: codeintel_uploads_store_total
Aggregate store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_99th_percentile_duration
Aggregate successful store operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_errors_total
Aggregate store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_error_rate
Aggregate store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_store_total
Store operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_99th_percentile_duration
99th percentile successful store operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_store_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_store_errors_total
Store operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_store_error_rate
Store operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_store_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_store_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > GQL Transport
codeintel-uploads: codeintel_uploads_transport_graphql_total
Aggregate resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration
Aggregate successful resolver operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_errors_total
Aggregate resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_error_rate
Aggregate resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_transport_graphql_total
Resolver operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_99th_percentile_duration
99th percentile successful resolver operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_transport_graphql_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_transport_graphql_errors_total
Resolver operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_graphql_error_rate
Resolver operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_graphql_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_graphql_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > HTTP Transport
codeintel-uploads: codeintel_uploads_transport_http_total
Aggregate HTTP handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration
Aggregate successful HTTP handler operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_errors_total
Aggregate HTTP handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_error_rate
Aggregate HTTP handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
codeintel-uploads: codeintel_uploads_transport_http_total
HTTP handler operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_99th_percentile_duration
99th percentile successful HTTP handler operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_transport_http_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_transport_http_errors_total
HTTP handler operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_transport_http_error_rate
HTTP handler operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_transport_http_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_transport_http_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Repository with stale commit graph
codeintel-uploads: codeintel_commit_graph_queue_size
Repository queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_commit_graph_total{job=~"^${source:regex}.*"})
codeintel-uploads: codeintel_commit_graph_queue_growth_rate
Repository queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the processing rate exceeds the enqueue rate
- A value = 1 indicates that the processing rate matches the enqueue rate
- A value > 1 indicates that the processing rate is lower than the enqueue rate
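With invented counts over the same 30m window, the enqueue/process ratio behaves as follows (a minimal sketch, not Sourcegraph code):

```python
def queue_growth_rate(enqueued, processed):
    """Ratio of enqueued jobs to finished jobs over the same window.

    A result above 1 means the queue is growing; below 1 means it is
    draining. Returns infinity if nothing was processed at all.
    """
    return enqueued / processed if processed else float("inf")

print(queue_growth_rate(120, 150))  # 0.8: processing outpaces enqueues, queue drains
print(queue_growth_rate(150, 120))  # 1.25: enqueues outpace processing, queue grows
```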
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_commit_graph_total{job=~"^${source:regex}.*"}[30m])) / sum(increase(src_codeintel_commit_graph_processor_total{job=~"^${source:regex}.*"}[30m]))
codeintel-uploads: codeintel_commit_graph_queued_max_age
Repository queue longest time in queue
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_codeintel_commit_graph_queued_duration_seconds_total{job=~"^${source:regex}.*"})
Code Intelligence > Uploads: Codeintel: Uploads > Expiration task
codeintel-uploads: codeintel_background_repositories_scanned_total
LSIF upload repository scan: repositories scanned every 5m
The number of repositories scanned for data retention.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_repositories_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_upload_records_scanned_total
LSIF upload records scan: records scanned every 5m
The number of codeintel upload records scanned for data retention.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_upload_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_commits_scanned_total
LSIF upload commits scanned: commits scanned every 5m
The number of commits reachable from a codeintel upload record scanned for data retention.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_commits_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_background_upload_records_expired_total
LSIF upload records expired: uploads scanned every 5m
The number of codeintel upload records marked as expired.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_background_upload_records_expired_total{job=~"^${source:regex}.*"}[5m]))
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads janitor unknown repository
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_unknown_repository_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_unknown_repository_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100610
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100611
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_janitor_unknown_repository_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100612
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_repository_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100613
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_unknown_repository_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads janitor unknown commit
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100700
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_unknown_commit_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100701
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_unknown_commit_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100710
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100711
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_janitor_unknown_commit_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100712
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_unknown_commit_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100713
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_unknown_commit_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads janitor abandoned
codeintel-uploads: codeintel_uploads_janitor_abandoned_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100800
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_abandoned_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_abandoned_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100801
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_abandoned_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_abandoned_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100810
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_abandoned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_abandoned_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100811
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_janitor_abandoned_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_janitor_abandoned_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100812
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_abandoned_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100813
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_abandoned_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads expirer unreferenced
codeintel-uploads: codeintel_uploads_expirer_unreferenced_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100900
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_expirer_unreferenced_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100901
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_expirer_unreferenced_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100910
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100911
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_expirer_unreferenced_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100912
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=100913
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
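All of the error-rate panels in this document share this query shape: errors divided by total attempts, scaled to a percentage, where the denominator adds the errors series to the _total series (so _total is treated as counting successful invocations). A minimal Python sketch of the same arithmetic, with made-up sample counts:

```python
def error_rate_percent(errors: float, successes: float) -> float:
    """Error rate as the dashboards compute it: errors / (successes + errors) * 100.

    Returns 0.0 when there were no invocations at all, mirroring an
    empty series rather than dividing by zero.
    """
    attempts = successes + errors
    if attempts == 0:
        return 0.0
    return errors / attempts * 100

# e.g. 3 errors out of 60 total invocations in the window
print(error_rate_percent(errors=3, successes=57))  # 5.0
```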
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads expirer unreferenced graph
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_expirer_unreferenced_graph_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_expirer_unreferenced_graph_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101010
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101011
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_expirer_unreferenced_graph_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101012
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_expirer_unreferenced_graph_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101013
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_expirer_unreferenced_graph_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads hard deleter
codeintel-uploads: codeintel_uploads_hard_deleter_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_hard_deleter_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_hard_deleter_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_hard_deleter_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_hard_deleter_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_hard_deleter_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_hard_deleter_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101111
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_hard_deleter_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_hard_deleter_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101112
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_hard_deleter_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101113
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_hard_deleter_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_hard_deleter_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads janitor audit logs
codeintel-uploads: codeintel_uploads_janitor_audit_logs_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_audit_logs_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_audit_logs_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_audit_logs_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_audit_logs_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101210
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_audit_logs_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101211
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_janitor_audit_logs_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_janitor_audit_logs_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101212
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_audit_logs_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101213
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_audit_logs_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Janitor task > Codeintel uploads janitor scip documents
codeintel-uploads: codeintel_uploads_janitor_scip_documents_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_scip_documents_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_scip_documents_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_janitor_scip_documents_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_scip_documents_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101310
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_scip_documents_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101311
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_janitor_scip_documents_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_janitor_scip_documents_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101312
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_janitor_scip_documents_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101313
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_janitor_scip_documents_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Reconciler task > Codeintel uploads reconciler scip metadata
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_reconciler_scip_metadata_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_reconciler_scip_metadata_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_reconciler_scip_metadata_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_metadata_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_reconciler_scip_metadata_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Code Intelligence > Uploads: Codeintel: Uploads > Reconciler task > Codeintel uploads reconciler scip data
codeintel-uploads: codeintel_uploads_reconciler_scip_data_records_scanned_total
Records scanned every 5m
The number of candidate records considered for cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_reconciler_scip_data_records_scanned_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_data_records_altered_total
Records altered every 5m
The number of candidate records altered as part of cleanup.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_codeintel_uploads_reconciler_scip_data_records_altered_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_data_total
Job invocation operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101510
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_data_99th_percentile_duration
99th percentile successful job invocation operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101511
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_codeintel_uploads_reconciler_scip_data_duration_seconds_bucket{job=~"^${source:regex}.*"}[5m])))
codeintel-uploads: codeintel_uploads_reconciler_scip_data_errors_total
Job invocation operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101512
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m]))
codeintel-uploads: codeintel_uploads_reconciler_scip_data_error_rate
Job invocation operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/codeintel-uploads/codeintel-uploads?viewPanel=101513
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m])) / (sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_total{job=~"^${source:regex}.*"}[5m])) + sum by (op)(increase(src_codeintel_uploads_reconciler_scip_data_errors_total{job=~"^${source:regex}.*"}[5m]))) * 100
Telemetry
Monitoring telemetry services in Sourcegraph.
To see this dashboard, visit /-/debug/grafana/d/telemetry/telemetry
on your Sourcegraph instance.
Telemetry: Telemetry Gateway Exporter: Export and queue metrics
telemetry: telemetry_gateway_exporter_queue_size
Telemetry event payloads pending export
The number of events queued to be exported.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(src_telemetrygatewayexporter_queue_size)
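Any of these panel expressions can also be evaluated directly against Prometheus via its standard HTTP query API, which is useful for scripting checks outside Grafana. A hedged sketch that only builds the request URL; the /api/v1/query path is standard Prometheus, but the base URL is an assumption you should replace with your deployment's Prometheus endpoint:

```python
from urllib.parse import urlencode

# Assumed base URL; point this at the Prometheus instance bundled with
# your Sourcegraph deployment.
PROM_BASE = "http://prometheus:9090"

def instant_query_url(expr: str) -> str:
    """URL for an instant query against the Prometheus HTTP API."""
    return f"{PROM_BASE}/api/v1/query?{urlencode({'query': expr})}"

url = instant_query_url("sum(src_telemetrygatewayexporter_queue_size)")
print(url)
```

Fetching that URL (e.g. with urllib.request or curl) returns a JSON body whose data.result holds the current queue size.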
telemetry: telemetry_gateway_exporter_queue_growth
Rate of growth of export queue over 30m
A positive value indicates the queue is growing.
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(deriv(src_telemetrygatewayexporter_queue_size[30m]))
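PromQL's deriv() fits a least-squares line through the gauge's samples in the window and reports its per-second slope, so a positive value means the queue is trending upward even if individual samples are noisy. A small Python sketch of that fit, using illustrative samples:

```python
def deriv(samples):
    """Per-second slope of (timestamp_seconds, value) samples via a
    least-squares linear fit, mirroring PromQL's deriv()."""
    n = len(samples)
    mean_t = sum(t for t, _ in samples) / n
    mean_v = sum(v for _, v in samples) / n
    cov = sum((t - mean_t) * (v - mean_v) for t, v in samples)
    var = sum((t - mean_t) ** 2 for t, _ in samples)
    return cov / var

# queue size rising by ~2 events per second
print(deriv([(0, 100), (10, 120), (20, 140)]))  # 2.0
```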
telemetry: src_telemetrygatewayexporter_exported_events
Events exported from queue per hour
The number of events being exported.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100010
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(increase(src_telemetrygatewayexporter_exported_events[1h]))
telemetry: telemetry_gateway_exporter_batch_size
Number of events exported per batch over 30m
The number of events exported in each batch. The largest bucket is the maximum number of events exported per batch.
If the distribution trends toward the maximum bucket, event export throughput is at or approaching saturation: try increasing TELEMETRY_GATEWAY_EXPORTER_EXPORT_BATCH_SIZE or decreasing TELEMETRY_GATEWAY_EXPORTER_EXPORT_INTERVAL.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100011
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le) (rate(src_telemetrygatewayexporter_batch_size_bucket[30m]))
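One way to read this panel programmatically: compute what fraction of batches land in the largest finite bucket; a high fraction means batches are consistently hitting the configured maximum size. A hedged Python sketch, with illustrative bucket bounds rather than the exporter's real ones:

```python
def share_in_largest_bucket(buckets):
    """Fraction of observations in the largest finite bucket.

    buckets: sorted (upper_bound, cumulative_count) pairs with +Inf last,
    mirroring a Prometheus _bucket series. Converts cumulative counts to
    per-bucket counts, then takes the last finite bucket's share.
    """
    counts = []
    prev = 0.0
    for _, cum in buckets:
        counts.append(cum - prev)
        prev = cum
    # the last finite bucket sits just before the +Inf entry
    return counts[-2] / buckets[-1][1]

share = share_in_largest_bucket([(64, 10), (128, 20), (256, 95), (float("inf"), 100)])
print(share)  # 0.75 -> most batches are full; consider raising the batch size
```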
Telemetry: Telemetry Gateway Exporter: Export job operations
telemetry: telemetrygatewayexporter_exporter_total
Events exporter operations every 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_exporter_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_exporter_99th_percentile_duration
Aggregate successful events exporter operation duration distribution over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_telemetrygatewayexporter_exporter_duration_seconds_bucket{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_exporter_errors_total
Events exporter operation errors every 30m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100102
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_exporter_error_rate
Events exporter operation error rate over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100103
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_exporter_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_exporter_errors_total{job=~"^worker.*"}[30m]))) * 100
Telemetry: Telemetry Gateway Exporter: Export queue cleanup job operations
telemetry: telemetrygatewayexporter_queue_cleanup_total
Export queue cleanup operations every 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_cleanup_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_cleanup_99th_percentile_duration
Aggregate successful export queue cleanup operation duration distribution over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_telemetrygatewayexporter_queue_cleanup_duration_seconds_bucket{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_cleanup_errors_total
Export queue cleanup operation errors every 30m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_cleanup_error_rate
Export queue cleanup operation error rate over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_queue_cleanup_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_queue_cleanup_errors_total{job=~"^worker.*"}[30m]))) * 100
Telemetry: Telemetry Gateway Exporter: Export queue metrics reporting job operations
telemetry: telemetrygatewayexporter_queue_metrics_reporter_total
Export backlog metrics reporting operations every 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_metrics_reporter_99th_percentile_duration
Aggregate successful export backlog metrics reporting operation duration distribution over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_telemetrygatewayexporter_queue_metrics_reporter_duration_seconds_bucket{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_metrics_reporter_errors_total
Export backlog metrics reporting operation errors every 30m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100302
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m]))
telemetry: telemetrygatewayexporter_queue_metrics_reporter_error_rate
Export backlog metrics reporting operation error rate over 30m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100303
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m])) / (sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_total{job=~"^worker.*"}[30m])) + sum(increase(src_telemetrygatewayexporter_queue_metrics_reporter_errors_total{job=~"^worker.*"}[30m]))) * 100
Telemetry: Usage data exporter (legacy): Job operations
telemetry: telemetry_job_total
Aggregate usage data exporter operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetry_job_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_99th_percentile_duration
Aggregate successful usage data exporter operation duration distribution over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (le)(rate(src_telemetry_job_duration_seconds_bucket{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_errors_total
Aggregate usage data exporter operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_error_rate
Aggregate usage data exporter operation error rate over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100403
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m])) / (sum(increase(src_telemetry_job_total{job=~"^worker.*"}[5m])) + sum(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))) * 100
telemetry: telemetry_job_total
Usage data exporter operations every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100410
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_telemetry_job_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_99th_percentile_duration
99th percentile successful usage data exporter operation duration over 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100411
on your Sourcegraph instance.
Technical details
Query:
SHELLhistogram_quantile(0.99, sum by (le,op)(rate(src_telemetry_job_duration_seconds_bucket{job=~"^worker.*"}[5m])))
telemetry: telemetry_job_errors_total
Usage data exporter operation errors every 5m
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100412
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))
telemetry: telemetry_job_error_rate
Usage data exporter operation error rate over 5m
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100413
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m])) / (sum by (op)(increase(src_telemetry_job_total{job=~"^worker.*"}[5m])) + sum by (op)(increase(src_telemetry_job_errors_total{job=~"^worker.*"}[5m]))) * 100
Telemetry: Usage data exporter (legacy): Queue size
telemetry: telemetry_job_queue_size_queue_size
Event level usage data queue size
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(src_telemetry_job_queue_size_total{job=~"^worker.*"})
telemetry: telemetry_job_queue_size_queue_growth_rate
Event level usage data queue growth rate over 30m
This value compares the rate of enqueues against the rate of finished jobs.
- A value < 1 indicates that the process rate > enqueue rate
- A value = 1 indicates that the process rate = enqueue rate
- A value > 1 indicates that the process rate < enqueue rate
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLsum(increase(src_telemetry_job_queue_size_total{job=~"^worker.*"}[30m])) / sum(increase(src_telemetry_job_queue_size_processor_total{job=~"^worker.*"}[30m]))
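The three cases above can be collapsed into a small helper; a sketch of how a monitoring script might classify the ratio of enqueued to processed jobs over the same window:

```python
def queue_trend(enqueued: float, processed: float) -> str:
    """Interpret the ratio of jobs enqueued to jobs processed over a window,
    matching the dashboard's reading of this panel."""
    ratio = enqueued / processed
    if ratio > 1:
        return "growing"   # enqueue rate exceeds process rate
    if ratio < 1:
        return "draining"  # process rate exceeds enqueue rate
    return "steady"

print(queue_trend(450, 300))  # growing
```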
Telemetry: Usage data exporter (legacy): Utilization
telemetry: telemetry_job_utilized_throughput
Utilized percentage of maximum throughput
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/telemetry/telemetry?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_telemetry_job_total{op="SendEvents"}[1h]) / on() group_right() src_telemetry_job_max_throughput * 100
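The utilization query divides the observed rate of SendEvents operations by the advertised maximum throughput and scales to a percentage; the same arithmetic as a sketch, with made-up rates:

```python
def utilization_percent(observed_rate: float, max_throughput: float) -> float:
    """Utilized share of maximum throughput, as in the panel's query:
    observed per-second operation rate over the configured maximum, as a percent."""
    return observed_rate / max_throughput * 100

print(utilization_percent(observed_rate=45.0, max_throughput=60.0))  # 75.0
```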
OpenTelemetry Collector
The OpenTelemetry collector ingests OpenTelemetry data from Sourcegraph and exports it to the configured backends.
To see this dashboard, visit /-/debug/grafana/d/otel-collector/otel-collector
on your Sourcegraph instance.
OpenTelemetry Collector: Receivers
otel-collector: otel_span_receive_rate
Spans received per receiver per minute
Shows the rate of spans accepted by the configured receiver
A Trace is a collection of spans, and a span represents a unit of work or operation; spans are the building blocks of Traces. The spans counted here have only been accepted by the receiver, which means they still have to move through the configured pipeline to be exported. For more information on tracing and on configuring an OpenTelemetry receiver, see https://opentelemetry.io/docs/collector/configuration/#receivers.
See the Exporters section for spans that have made it through the pipeline and been exported.
Depending on the configured processors, received spans might be dropped and not exported. For more information on configuring processors, see https://opentelemetry.io/docs/collector/configuration/#processors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (receiver) (rate(otelcol_receiver_accepted_spans[1m]))
otel-collector: otel_span_refused
Spans refused per receiver
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (receiver) (rate(otelcol_receiver_refused_spans[1m]))
OpenTelemetry Collector: Exporters
otel-collector: otel_span_export_rate
Spans exported per exporter per minute
Shows the rate of spans being sent by the exporter
A Trace is a collection of spans. A Span represents a unit of work or operation. Spans are the building blocks of Traces. The rate of spans here indicates spans that have made it through the configured pipeline and have been sent to the configured export destination.
For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (exporter) (rate(otelcol_exporter_sent_spans[1m]))
otel-collector: otel_span_export_failures
Span export failures by exporter
Shows the rate of spans that failed to be sent by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration or with the service that is being exported to.
For more information on configuring an exporter for the OpenTelemetry collector, see https://opentelemetry.io/docs/collector/configuration/#exporters.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (exporter) (rate(otelcol_exporter_send_failed_spans[1m]))
OpenTelemetry Collector: Queue Length
otel-collector: otelcol_exporter_queue_capacity
Exporter queue capacity
Shows the capacity of the retry queue (in batches).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (exporter) (rate(otelcol_exporter_queue_capacity{job=~"^.*"}[1m]))
otel-collector: otelcol_exporter_queue_size
Exporter queue size
Shows the current size of the retry queue.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (exporter) (rate(otelcol_exporter_queue_size{job=~"^.*"}[1m]))
otel-collector: otelcol_exporter_enqueue_failed_spans
Exporter enqueue failed spans
Shows the rate of spans that failed to be enqueued by the configured exporter. A number higher than 0 for a long period can indicate a problem with the exporter configuration.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (exporter) (rate(otelcol_exporter_enqueue_failed_spans{job=~"^.*"}[1m]))
OpenTelemetry Collector: Processors
otel-collector: otelcol_processor_dropped_spans
Spans dropped per processor per minute
Shows the rate of spans dropped by the configured processor.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (processor) (rate(otelcol_processor_dropped_spans[1m]))
OpenTelemetry Collector: Collector resource usage
otel-collector: otel_cpu_usage
CPU usage of the collector
Shows CPU usage as reported by the OpenTelemetry collector.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job) (rate(otelcol_process_cpu_seconds{job=~"^.*"}[1m]))
otel-collector: otel_memory_resident_set_size
Memory allocated to the otel collector
Shows the allocated memory Resident Set Size (RSS) as reported by the OpenTelemetry collector.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job) (rate(otelcol_process_memory_rss{job=~"^.*"}[1m]))
otel-collector: otel_memory_usage
Memory used by the collector
Shows how much memory is being used by the otel collector.
High memory usage might indicate that:
- the configured pipeline is keeping a lot of spans in memory for processing
- spans are failing to be sent and the exporter is configured to retry
- a high batch count is configured for the batch processor
For more information on configuring processors for the OpenTelemetry collector see https://opentelemetry.io/docs/collector/configuration/#processors.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100402
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (job) (rate(otelcol_process_runtime_total_alloc_bytes{job=~"^.*"}[1m]))
OpenTelemetry Collector: Container monitoring (not available on server)
otel-collector: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod otel-collector (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p otel-collector.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' otel-collector (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the otel-collector container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs otel-collector (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^otel-collector.*"}) > 60)
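The query's "not seen for more than one minute" check can be sketched outside Prometheus. A minimal illustration (the container names are hypothetical):

```python
import time


def missing_containers(last_seen: dict, now: float, threshold: float = 60.0) -> list:
    """Names of containers not seen within `threshold` seconds,
    mirroring (time() - container_last_seen) > 60."""
    return sorted(name for name, ts in last_seen.items() if now - ts > threshold)


now = time.time()
# One container reported 30s ago (healthy), one 90s ago (missing).
last_seen = {"otel-collector-0": now - 30, "otel-collector-1": now - 90}
assert missing_containers(last_seen, now) == ["otel-collector-1"]
```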
otel-collector: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100501
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^otel-collector.*"}
otel-collector: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100502
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^otel-collector.*"}
otel-collector: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with otel-collector issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100503
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^otel-collector.*"}[1h]) + rate(container_fs_writes_total{name=~"^otel-collector.*"}[1h]))
OpenTelemetry Collector: Kubernetes monitoring (only available on Kubernetes)
otel-collector: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/otel-collector/otel-collector?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*otel-collector"}) / count by (app) (up{app=~".*otel-collector"}) * 100
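The availability percentage this query computes is the number of pods reporting up divided by the total scraped, times 100. As an illustrative sketch (hypothetical pod names):

```python
def pods_available_pct(up_by_pod: dict) -> float:
    """Percentage of scraped pods reporting up (1),
    mirroring sum(up) / count(up) * 100 for the app."""
    return sum(up_by_pod.values()) / len(up_by_pod) * 100


# Three of four pods up -> 75% available.
assert pods_available_pct({"a": 1, "b": 1, "c": 0, "d": 1}) == 75.0
```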
Embeddings
Handles embeddings searches.
To see this dashboard, visit /-/debug/grafana/d/embeddings/embeddings
on your Sourcegraph instance.
Embeddings: Site configuration client update latency
embeddings: embeddings_site_configuration_duration_since_last_successful_update_by_instance
Duration since last successful site configuration update (by instance)
The duration since the configuration client used by the "embeddings" service last successfully updated its site configuration. Long durations could indicate issues updating the site configuration.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100000
on your Sourcegraph instance.
Technical details
Query:
SHELLsrc_conf_client_time_since_last_successful_update_seconds{job=~`.*embeddings`,instance=~`${instance:regex}`}
embeddings: embeddings_site_configuration_duration_since_last_successful_update_by_instance
Maximum duration since last successful site configuration update (all "embeddings" instances)
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100001
on your Sourcegraph instance.
Technical details
Query:
SHELLmax(max_over_time(src_conf_client_time_since_last_successful_update_seconds{job=~`.*embeddings`,instance=~`${instance:regex}`}[1m]))
Embeddings: Database connections
embeddings: max_open_conns
Maximum open
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100100
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_max_open{app_name="embeddings"})
embeddings: open_conns
Established
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100101
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_open{app_name="embeddings"})
embeddings: in_use
Used
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100110
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_in_use{app_name="embeddings"})
embeddings: idle
Idle
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100111
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (src_pgsql_conns_idle{app_name="embeddings"})
embeddings: mean_blocked_seconds_per_conn_request
Mean blocked seconds per conn request
Refer to the alerts reference for 2 alerts related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100120
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_blocked_seconds{app_name="embeddings"}[5m])) / sum by (app_name, db_name) (increase(src_pgsql_conns_waited_for{app_name="embeddings"}[5m]))
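The query divides total blocked seconds by the number of connection requests that had to wait, yielding the mean wait per request. A small illustrative model (hypothetical counter increases over the 5m window):

```python
def mean_blocked_seconds(blocked_seconds: float, waited_requests: float) -> float:
    """Average time each connection request spent blocked waiting for a
    connection, mirroring increase(blocked_seconds) / increase(waited_for)."""
    if waited_requests == 0:
        # No request had to wait in this window.
        return 0.0
    return blocked_seconds / waited_requests


# 2.5s of total blocking spread across 50 waiting requests -> 50ms each
assert mean_blocked_seconds(2.5, 50) == 0.05
```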
embeddings: closed_max_idle
Closed by SetMaxIdleConns
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100130
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle{app_name="embeddings"}[5m]))
embeddings: closed_max_lifetime
Closed by SetConnMaxLifetime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100131
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_lifetime{app_name="embeddings"}[5m]))
embeddings: closed_max_idle_time
Closed by SetConnMaxIdleTime
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100132
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by (app_name, db_name) (increase(src_pgsql_conns_closed_max_idle_time{app_name="embeddings"}[5m]))
Embeddings: Container monitoring (not available on server)
embeddings: container_missing
Container missing
This value is the number of times a container has not been seen for more than one minute. If you observe this value change independent of deployment events (such as an upgrade), it could indicate pods are being OOM killed or terminated for some other reason.
- Kubernetes:
  - Determine if the pod was OOM killed using kubectl describe pod embeddings (look for OOMKilled: true) and, if so, consider increasing the memory limit in the relevant Deployment.yaml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using kubectl logs -p embeddings.
- Docker Compose:
  - Determine if the container was OOM killed using docker inspect -f '{{json .State}}' embeddings (look for "OOMKilled":true) and, if so, consider increasing the memory limit of the embeddings container in docker-compose.yml.
  - Check the logs before the container restarted to see if there are panic: messages or similar using docker logs embeddings (note this will include logs from the previous and currently running container).
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100200
on your Sourcegraph instance.
Technical details
Query:
SHELLcount by(name) ((time() - container_last_seen{name=~"^embeddings.*"}) > 60)
embeddings: container_cpu_usage
Container cpu usage total (1m average) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100201
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_cpu_usage_percentage_total{name=~"^embeddings.*"}
embeddings: container_memory_usage
Container memory usage by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100202
on your Sourcegraph instance.
Technical details
Query:
SHELLcadvisor_container_memory_usage_percentage_total{name=~"^embeddings.*"}
embeddings: fs_io_operations
Filesystem reads and writes rate by instance over 1h
This value indicates the number of filesystem read and write operations by containers of this service. When extremely high, this can indicate a resource usage problem, or can cause problems with the service itself, especially if high values or spikes correlate with embeddings issues.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100203
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(name) (rate(container_fs_reads_total{name=~"^embeddings.*"}[1h]) + rate(container_fs_writes_total{name=~"^embeddings.*"}[1h]))
Embeddings: Provisioning indicators (not available on server)
embeddings: provisioning_container_cpu_usage_long_term
Container cpu usage total (90th percentile over 1d) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100300
on your Sourcegraph instance.
Technical details
Query:
SHELLquantile_over_time(0.9, cadvisor_container_cpu_usage_percentage_total{name=~"^embeddings.*"}[1d])
embeddings: provisioning_container_memory_usage_long_term
Container memory usage (1d maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100301
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^embeddings.*"}[1d])
embeddings: provisioning_container_cpu_usage_short_term
Container cpu usage total (5m maximum) across all cores by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100310
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_cpu_usage_percentage_total{name=~"^embeddings.*"}[5m])
embeddings: provisioning_container_memory_usage_short_term
Container memory usage (5m maximum) by instance
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100311
on your Sourcegraph instance.
Technical details
Query:
SHELLmax_over_time(cadvisor_container_memory_usage_percentage_total{name=~"^embeddings.*"}[5m])
embeddings: container_oomkill_events_total
Container OOMKILL events total by instance
This value indicates the total number of times the container main process or child processes were terminated by OOM killer. When it occurs frequently, it is an indicator of underprovisioning.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100312
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by (name) (container_oom_events_total{name=~"^embeddings.*"})
Embeddings: Golang runtime monitoring
embeddings: go_goroutines
Maximum active goroutines
A high value here indicates a possible goroutine leak.
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100400
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_goroutines{job=~".*embeddings"})
embeddings: go_gc_duration_seconds
Maximum go garbage collection duration
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100401
on your Sourcegraph instance.
Technical details
Query:
SHELLmax by(instance) (go_gc_duration_seconds{job=~".*embeddings"})
Embeddings: Kubernetes monitoring (only available on Kubernetes)
embeddings: pods_available_percentage
Percentage pods available
Refer to the alerts reference for 1 alert related to this panel.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100500
on your Sourcegraph instance.
Technical details
Query:
SHELLsum by(app) (up{app=~".*embeddings"}) / count by (app) (up{app=~".*embeddings"}) * 100
Embeddings: Cache
embeddings: hit_ratio
Hit ratio of the embeddings cache
A low hit rate indicates your cache is not well utilized. Consider increasing the cache size.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100600
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_embeddings_cache_hit_count[30m]) / (rate(src_embeddings_cache_hit_count[30m]) + rate(src_embeddings_cache_miss_count[30m]))
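The hit ratio is the hit rate divided by the sum of hit and miss rates. A minimal sketch of that arithmetic (the numbers are hypothetical, not real cache counters):

```python
def cache_hit_ratio(hits: float, misses: float) -> float:
    """Fraction of lookups served from the cache,
    mirroring hit_count / (hit_count + miss_count)."""
    total = hits + misses
    if total == 0:
        # No lookups in the window; report 0 rather than divide by zero.
        return 0.0
    return hits / total


# 90 hits and 10 misses over the window -> 0.9 hit ratio
assert cache_hit_ratio(90, 10) == 0.9
```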
embeddings: missed_bytes
Bytes fetched due to a cache miss
A high volume of misses indicates that many searches are not hitting the cache. Consider increasing the cache size.
This panel has no related alerts.
To see this panel, visit /-/debug/grafana/d/embeddings/embeddings?viewPanel=100601
on your Sourcegraph instance.
Technical details
Query:
SHELLrate(src_embeddings_cache_miss_bytes[10m])