Add new extension: yagp_hooks_collector #1629
Conversation
Just create the overall project structure, with a Makefile to generate protobufs, compile it into a shared library extension, and install it,
as a foundation to build on.
- borrow GlowByte code to generate plan text and SessionInfo
- borrow code from our in-house pg_stat_statements to generate query id and plan id
- refactor code to follow common naming conventions and indentation
- do some minor refactoring to follow the common naming convention
- add an additional message right after the ExecutorStart hook
1) Query instrumentation
2) /proc/self/* stats
This allows finer granularity than executor hooks. Also removed some code duplication and data duplication.
1. Initialize query instrumentation to NULL so that it can be properly checked later (temporary solution; a proper fix still needs to be found).
2. Don't collect spill info on query end, because (a) it will always be zero and (b) it could crash if we failed to enlarge a spill file. It seems we need some cumulative statistics for spill info; need to check what EXPLAIN ANALYZE uses.
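A minimal sketch of the guard described in point 1, using hypothetical `QueryState`/`Instrumentation` stand-ins (not the extension's actual types): the instrumentation pointer starts as NULL and is checked before every use, so a query that never attached instrumentation is skipped rather than dereferenced.

```c
#include <stddef.h>
#include <stdbool.h>

/* Hypothetical stand-ins for illustration only. */
typedef struct Instrumentation { double total_time; } Instrumentation;
typedef struct QueryState { Instrumentation *instr; } QueryState;

/* Returns false (and collects nothing) when instrumentation was never
 * attached; the NULL initialization makes this check reliable. */
bool
collect_instrumentation(const QueryState *qs, double *out_time)
{
    if (qs == NULL || qs->instr == NULL)
        return false;               /* nothing attached yet: skip safely */
    *out_time = qs->instr->total_time;
    return true;
}
```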
1. Sync with protobuf changes to collect segment info
2. Remove noisy logging
3. Fix some missing node types in pg_stat_statements
Reason: when the query info hook is called with status 'DONE', the planstate has already been deallocated by ExecutorEnd.
1) Give higher gRPC timeouts to the query dispatcher, as losing messages there is more critical.
2) If we fail to send a message via gRPC, we notify a background thread about it and refuse to send any new messages until that thread re-establishes the lost connection.
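The send gate in point 2 can be sketched roughly as follows. This is an illustrative reduction, not the extension's code: a shared flag is set on the first send failure, every later send is refused while the flag is up, and the (hypothetical) reconnect thread clears it once the connection is back.

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Shared between the sender and a background reconnect thread. */
static atomic_bool conn_broken = false;

/* try_send takes a simulated send result for illustration; the real
 * code would attempt a gRPC send here. */
bool
try_send(bool send_ok)
{
    if (atomic_load(&conn_broken))
        return false;               /* refuse: reconnect in progress */
    if (!send_ok)
    {
        /* First failure: raise the flag to notify the reconnect
         * thread and stop accepting new messages. */
        atomic_store(&conn_broken, true);
        return false;
    }
    return true;
}

/* Called by the background thread after re-establishing the link. */
void
on_reconnected(void)
{
    atomic_store(&conn_broken, false);
}
```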
Don't collect system queries with empty query text and ccnt == 0
Rethrowing them might break other extensions and even the query execution pipeline itself.
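The swallow-instead-of-rethrow policy can be illustrated with a setjmp/longjmp miniature (standing in for PostgreSQL's PG_TRY/PG_CATCH machinery; the function names here are hypothetical): an error raised inside the hook is caught locally and turned into a failure result, so execution outside the hook continues.

```c
#include <setjmp.h>
#include <stdbool.h>

static jmp_buf hook_error_ctx;

/* Simulated failing collection step; longjmp stands in for a thrown
 * backend error (elog/ereport ERROR). */
static void
risky_collect(bool fail)
{
    if (fail)
        longjmp(hook_error_ctx, 1);
}

/* The hook catches its own errors and reports failure instead of
 * rethrowing, so other extensions and the executor keep running. */
bool
safe_hook(bool fail)
{
    if (setjmp(hook_error_ctx) == 0)
    {
        risky_collect(fail);
        return true;        /* collection succeeded */
    }
    return false;           /* error swallowed; execution continues */
}
```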
Similarly to [1], add missing executor query info hooks. [1] open-gpdb/gpdb@87fc05d
Copy of [1], with the additional changes needed for Cloudberry described below: the testing C functions were changed to set-returning ones compared with [1], because we need control over where the function is executed (either on master or on segments), and in Cloudberry these functions must return a set of values, so they were changed to return SETOF. [1] open-gpdb/gpdb@989ca06
Copy of [1]: send() may return -1 in case of an error; do not add -1 to the total bytes sent. [1] open-gpdb/gpdb@e1f6c08
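A minimal sketch of the corrected accounting (a generic send-all loop, not the extension's exact function): the error return is handled separately, so -1 is never folded into the running byte count.

```c
#include <errno.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Send the whole buffer, accumulating only successful byte counts.
 * On error, return -1 instead of corrupting total_bytes with -1. */
ssize_t
send_all(int fd, const char *buf, size_t len)
{
    size_t total_bytes = 0;

    while (total_bytes < len)
    {
        ssize_t n = send(fd, buf + total_bytes, len - total_bytes, 0);

        if (n < 0)
        {
            if (errno == EINTR)
                continue;   /* interrupted: retry, count nothing */
            return -1;      /* real error: do NOT add -1 to the total */
        }
        total_bytes += (size_t) n;
    }
    return (ssize_t) total_bytes;
}
```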
The extension generates normalized query text and plans using jumbling functions. Those functions may fail when translating to wide characters if the current locale cannot handle the character set. The fix changes the functions that generate normalized query text/plans to noexcept versions, so we can check whether an error occurred and continue execution. The test checks that even when those functions fail, the plan is still executed. This test is partially taken from src/test/regress/gp_locale.sql.
Cloudberry builds treat compiler warnings as errors. For consistency, this behavior has been enabled in yagp_hooks_collector. This commit also fixes the warnings in yagp_hooks_collector.
We faced an issue: segments fail with this backtrace:
```
#7 0x00007f9b2adbf2e0 in set_qi_error_message (req=0x55f24a6011f0) at src/ProtoUtils.cpp:124
#8 0x00007f9b2adc30d9 in EventSender::collect_query_done (this=0x55f24a5489f0, query_desc=0x55f24a71ca68, status=METRICS_QUERY_ERROR) at src/EventSender.cpp:222
#9 0x00007f9b2adc23e1 in EventSender::query_metrics_collect (this=0x55f24a5489f0, status=METRICS_QUERY_ERROR, arg=0x55f24a71ca68) at src/EventSender.cpp:53
```
The root cause is that we try to send error-message info from the hooks collector, but for some queries the ErrorData structure can be NULL despite the fact that an error has occurred; it depends on the error type and on where the error was raised. So we should check whether we have error details before using them.
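The fix boils down to a NULL guard before formatting the error message. A reduced sketch with a hypothetical `ErrorInfo` stand-in for the backend's ErrorData (the real `set_qi_error_message` works on protobuf request objects):

```c
#include <stddef.h>

/* Hypothetical mirror of the relevant ErrorData field. */
typedef struct ErrorInfo
{
    const char *message;
} ErrorInfo;

/* Pick the message to report, falling back to a placeholder when the
 * error details are missing (edata can be NULL even though an error
 * occurred, depending on the error type and location). */
const char *
qi_error_message(const ErrorInfo *edata)
{
    if (edata == NULL || edata->message == NULL)
        return "unknown error";     /* guard: no details available */
    return edata->message;
}
```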
Right now we do not fully support all the features from #1085, but I believe that one day this solution will be no worse.
Hi all! I want to extend the description of this project a bit. Here we collect the state of the query during the executor stages and send it via unix sockets. It was originally made for Greenplum, and I adapted it for Cloudberry here. For testing purposes only, we support writing data to a catalog table so it can be selected during regression tests. The tuples are logged even during transaction aborts, using frozen tuples (only for the tests). The state of the query is created before the executor_start stage and released after the executor_end stage. If executor_end is missing for some unknown reason, the state is not freed. To fix this problem, changes to code outside of the project were introduced to correctly handle queries with a missing executor_end stage.
* Rename internal names
* Add gp-stats-collector test for rocky8 and deb
A diagnostic and monitoring extension for Cloudberry clusters
Briefly, the interaction scheme is:
Cloudberry (run executor hooks) -> hooks collector extension (create protobuf messages and send them via UDS) -> local yagpcc agent
This PR is the part that sends data from the PG instance to the external agent.
The external agent receives the telemetry, then stores, aggregates, and exposes it to consumers. The agent source code is https://github.com/open-gpdb/yagpcc. We're going to propose adding it to the Apache Cloudberry infrastructure later, after fixing all the tests.
This is quite similar to the idea of #1085, but with a different implementation. It is completely separate from the database application and written in Go, so it can be maintained separately. It has no influence on the main database, and therefore different development requirements.
The extension has pg_regress tests; they were copied from pg_stat_statements (PG18) and adjusted to the extension's behaviour.
The original extension authors are Smyatkin-Maxim and NJrslv. I left their commits as-is to keep the blame history in the committed source.