How to calculate metadata size in Firebird SQL

(c) Kuzmenko Dmitry, 20-SEP-2025
Also available in Portuguese: Como calcular o tamanho dos metadados no Firebird SQL

Firebird employs several distinct caches: the database cache, sort cache, prepared statements cache, and metadata cache. While the first three can be configured through firebird.conf and databases.conf, the metadata cache operates differently—it cannot be regulated through these configuration files.

The metadata cache stores compiled BLR (Binary Language Representation)—the binary code generated from PSQL text—which is loaded when accessing tables (computed fields, defaults, etc.), triggers, procedures, and views. When PSQL code is saved (via CREATE or ALTER statements), Firebird generates and stores the corresponding BLR code, which is then executed when the object is accessed. During runtime, the original SQL/PSQL source code is not used.
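
The difference between the stored source and its compiled form can be inspected directly in the system tables. As a simple illustration (covering stored procedures only), the following query compares the size of each procedure's PSQL source with the size of its BLR:

select
    rdb$procedure_name as procedure_name
    ,octet_length(rdb$procedure_source) as source_bytes -- PSQL source text
    ,octet_length(rdb$procedure_blr) as blr_bytes       -- compiled BLR
from rdb$procedures
order by 3 desc

Similar pairs of source and BLR columns exist for triggers (rdb$triggers) and views (rdb$relations).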

Architectural Evolution

In Classic and SuperClassic architectures, each connection functions as its own "server," meaning metadata is loaded exclusively into that connection's memory. SuperServer 2.x introduced a shared metadata cache, but this approach created contention issues. Additionally, applying modified metadata required disconnecting all database connections and restarting them.

Firebird 3.0 reintroduced separate metadata caches for each connection, even in SuperServer architecture—a design that persists in versions 4.0 and 5.0.

Estimating Metadata Cache Size

Unlike other caches with configurable sizes, determining the metadata cache size isn't straightforward. This becomes particularly relevant when working with databases containing thousands of tables, triggers, and procedures.

While explicit measurement is challenging, you can use these approximation methods:

Method 1: Extract Metadata to SQL Script

Extract all database metadata using:

  isql -x database > script.sql

This creates script.sql containing only the database structure, without data. However, this includes both SQL metadata and BLR together.
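
Alternatively, a metadata-only backup made with gbak gives a similar structure-only artifact; for example (the file names here are illustrative):

  gbak -b -m database database_meta.fbk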

These methods provide an upper-bound estimate (with overhead) of the metadata volume that each connection might allocate if it accessed all database objects simultaneously.

Keep in mind that connections load metadata selectively: if a connection calls only one procedure, only that procedure is loaded into metadata memory, along with the tables it accesses and any triggers on those tables.
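
One way to observe this selective loading is to watch your own connection's attachment-level memory counters grow as you touch more objects. A minimal sketch using the monitoring tables (the counters include other per-connection structures besides metadata):

select
    a.mon$attachment_id
    ,m.mon$memory_used      -- memory currently used by this connection, including its loaded metadata
    ,m.mon$memory_allocated -- memory allocated to this connection by the OS
from mon$attachments a
join mon$memory_usage m
    on m.mon$stat_id = a.mon$stat_id and
       m.mon$stat_group = 1 -- stat_group 1 = attachment-level statistics
where a.mon$attachment_id = current_connection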

Direct Measurement

Despite these limitations, there is a way to determine the actual memory allocated for metadata. This can be accomplished with the following query:  
select
    d.mon$page_buffers pg_buffers
    ,d.mon$page_size pg_size
    ,iif(m.memo_db_used = 0, 'dedicated', 'shared') pg_cache_type
    ,d.mon$page_buffers * d.mon$page_size * iif( m.memo_db_used = 0, total_attachments_cnt, 1 ) as page_cache_size
    ,m.memo_used_att - (memo_used_trn + memo_used_stm) - iif( m.memo_db_used = 0, d.mon$page_buffers * d.mon$page_size * total_attachments_cnt, 0)  as meta_cache_size
    ,m.memo_db_used
    ,m.memo_db_allo
    ,m.memo_used_att
    ,m.memo_used_trn
    ,m.memo_used_stm
    ,m.total_attachments_cnt
    ,m.active_attachments_cnt
    ,m.page_cache_operating_stm_cnt
    ,m.data_transfer_paused_stm_cnt
from (
    select
        sum( iif( u.stat_gr = 0, m.mon$memory_used, 0) ) memo_db_used -- SC/CS: 0; SS: >0
       ,sum( iif( u.stat_gr = 0, m.mon$memory_allocated, 0) ) memo_db_allo -- SC/CS: 0; SS: >0
       ,sum( iif( u.stat_gr = 1, m.mon$memory_used, 0) ) memo_used_att
       ,sum( iif( u.stat_gr = 2, m.mon$memory_used, 0) ) memo_used_trn
       ,sum( iif( u.stat_gr = 3, m.mon$memory_used, 0) ) memo_used_stm
       ,sum( iif( u.stat_gr = 1, 1, 0 ) ) total_attachments_cnt
       ,sum( iif( u.stat_gr = 1 and u.state = 1, 1, 0 ) ) active_attachments_cnt
       ,sum( iif( u.stat_gr = 2 and u.state = 1, 1, 0 ) ) active_transactions_cnt
       ,sum( iif( u.stat_gr = 3 and u.state = 1, 1, 0 ) ) page_cache_operating_stm_cnt --  server_side_run_stm_cnt
       ,sum( iif( u.stat_gr = 3 and u.state = 2, 1, 0 ) ) data_transfer_paused_stm_cnt -- data_transf_run_stm_cnt
    from mon$memory_usage m
    join
    (
        select 0 as stat_gr, m.mon$stat_id as stat_id, null as att_id, null as state
        from mon$memory_usage m
        where m.mon$stat_group =0
        UNION ALL
        select 1 as stat_gr, a.mon$stat_id as stat_id, a.mon$attachment_id as att_id, a.mon$state as state
        from mon$attachments a
         -- added 07.05.2020, actual for SuperServer 3.x+:
         -- total_attachments_cnt must not include GC and CW
        where mon$remote_protocol is not null -- common for 2.5 and 3.x+
        -- FB 3.x+ only: a.mon$system_flag is distinct from 1
        UNION ALL
        select 2,            t.mon$stat_id, t.mon$attachment_id, t.mon$state
        from mon$transactions t
        UNION ALL
        select 3,            s.mon$stat_id, s.mon$attachment_id, s.mon$state
        from mon$statements s
        -- ?! --> where upper( s.mon$sql_text ) not similar to upper('EXECUTE[[:WHITESPACE:]]+BLOCK%')
    )  u
    on
        m.mon$stat_id = u.stat_id and
        m.mon$stat_group = u.stat_gr
) m
cross join mon$database d
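
To try the query, save it to a file (terminated with a semicolon) and run it from isql against the database of interest, e.g. (the script name is arbitrary):

  isql -i meta_cache_size.sql employee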

For example, when run against the standard employee.fdb database, it returns 614 kilobytes in version 3.0 and 1 megabyte in version 5.0.

The increase in version 5.0 reflects additions to the metadata structure: more mon$ tables with additional columns. The query accounts for both the database metadata itself and the size of mon$ tables when they are accessed.

Focus on the META_CACHE_SIZE column, which displays the individual metadata cache size per connection. The MEMO_DB_ALLO column shows the total metadata allocation across all active connections (in SuperServer architecture).

For reference, the query also displays other memory metrics, including allocations for transactions, SQL queries, and related operations.