Data Flakes

Running Snowflake at scale means keeping on top of cost, performance, security, and governance simultaneously. The platform surfaces an enormous amount of operational intelligence through the SNOWFLAKE.ACCOUNT_USAGE schema, but knowing which views to query — and how to interpret what comes back — takes time to learn.

In this article, we will walk through a comprehensive set of administration queries covering every major area of Snowflake platform management, from credit monitoring and warehouse performance through to data governance and real-time session analysis. Whether you are a Head of Data building an operational runbook or a data engineer who has just been handed the admin keys, this toolkit will give you visibility across your entire account.

Snowflake changes frequently, with new features added regularly; review the official documentation for the latest details on which features are included in each edition.

Prerequisites and Important Caveats#

Before running any of these queries, there are two things you need to understand.

Role requirements: Most ACCOUNT_USAGE views require the ACCOUNTADMIN role, or a role that has been granted IMPORTED PRIVILEGES on the SNOWFLAKE database. Where possible, grant IMPORTED PRIVILEGES to a dedicated monitoring role rather than running everything as ACCOUNTADMIN.
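
A minimal sketch of that setup (the role and user names here are illustrative, not from the original article):

```sql
-- Hypothetical names: a dedicated monitoring role
CREATE ROLE IF NOT EXISTS monitor_role;
GRANT IMPORTED PRIVILEGES ON DATABASE SNOWFLAKE TO ROLE monitor_role;
GRANT ROLE monitor_role TO USER jane_analyst;  -- illustrative user
```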

Latency: ACCOUNT_USAGE views have up to 3 hours of latency and retain data for 1 year. They are excellent for trend analysis and historical investigation but not suitable for real-time alerting. For zero-latency queries, use INFORMATION_SCHEMA instead — we cover those at the end of this article.

```mermaid
graph TD
    A[Monitoring Need] --> B{Time Sensitivity?}
    B -->|Historical / Trends| C[ACCOUNT_USAGE<br/>Up to 3h latency<br/>1 year retention]
    B -->|Real-time / Immediate| D[INFORMATION_SCHEMA<br/>Zero latency<br/>Short retention]
    C --> E[Cost Analysis<br/>Security Audits<br/>Governance Reviews]
    D --> F[Running Queries<br/>Current Sessions<br/>Live Storage]
```

1. Cost and Credit Monitoring#

Credit consumption is the most important signal in any Snowflake account. Left unchecked, idle warehouses, runaway tasks, and aggressive clustering can drive costs far beyond expectations.

Credit Consumption by Warehouse#

Let’s start with the most fundamental cost query — total credits per warehouse over the last 30 days:

-- Total credit consumption by warehouse (last 30 days)
SELECT
    warehouse_name,
    SUM(credits_used) AS total_credits,
    SUM(credits_used_compute) AS compute_credits,
    SUM(credits_used_cloud_services) AS cloud_services_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;

The split between compute_credits and cloud_services_credits is worth paying attention to. Cloud services credits cover query compilation, metadata operations, and authentication. If cloud services consistently exceed 10% of your compute spend, it can indicate an excessive number of small queries or frequent warehouse starts — both worth investigating.
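
To check that 10% rule of thumb directly, one possible variation on the query above computes the ratio per warehouse (the threshold value comes from this section's guidance, not from Snowflake itself):

```sql
-- Warehouses where cloud services exceed 10% of compute spend (last 30 days)
SELECT
    warehouse_name,
    SUM(credits_used_compute) AS compute_credits,
    SUM(credits_used_cloud_services) AS cloud_services_credits,
    ROUND(SUM(credits_used_cloud_services)
          / NULLIF(SUM(credits_used_compute), 0) * 100, 2) AS cloud_services_pct
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
HAVING cloud_services_pct > 10
ORDER BY cloud_services_pct DESC;
```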

Daily Credit Trend#

Aggregating to a daily trend reveals patterns — weekly cycles, unusual spikes, or gradual growth that can be difficult to spot from totals alone:

-- Daily credit trend (last 30 days)
SELECT
    start_time::DATE AS usage_date,
    SUM(credits_used) AS total_credits,
    SUM(credits_used_compute) AS compute_credits,
    SUM(credits_used_cloud_services) AS cloud_services_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY usage_date
ORDER BY usage_date;

Idle Credit Waste#

This query is one of the most valuable for cost reduction — it shows credits consumed by warehouses that were running but not executing queries:

-- Warehouse idle cost (credits consumed while not running queries, last 10 days)
SELECT
    warehouse_name,
    SUM(credits_used_compute) AS total_compute_credits,
    SUM(credits_attributed_compute_queries) AS query_attributed_credits,
    SUM(credits_used_compute) - SUM(credits_attributed_compute_queries) AS idle_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -10, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY idle_credits DESC;

Why this matters: If idle_credits is significant, your auto-suspend settings are likely too conservative. Most interactive warehouses can safely auto-suspend after 60 seconds without impacting user experience.
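
Tightening auto-suspend is a one-line change, sketched here with an illustrative warehouse name (AUTO_SUSPEND is specified in seconds):

```sql
-- Hypothetical warehouse: suspend after 60 seconds of inactivity
ALTER WAREHOUSE analytics_wh SET AUTO_SUSPEND = 60;
```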

Beyond Warehouses: Serverless and Other Services#

Not all credits come from warehouses. Serverless tasks, automatic clustering, Snowpipe, and Cortex AI functions each have their own consumption footprint:

-- All metering across services (compute, serverless, etc.) last 30 days
SELECT
    service_type,
    SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY service_type
ORDER BY total_credits DESC;
-- Serverless task credit consumption (last 30 days)
SELECT
    task_name,
    database_name,
    schema_name,
    SUM(credits_used) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.SERVERLESS_TASK_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY task_name, database_name, schema_name
ORDER BY total_credits DESC;
-- Automatic clustering credit consumption (last 30 days)
SELECT
    table_name,
    database_name,
    schema_name,
    SUM(credits_used) AS total_credits,
    SUM(num_bytes_reclustered) AS bytes_reclustered
FROM SNOWFLAKE.ACCOUNT_USAGE.AUTOMATIC_CLUSTERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY table_name, database_name, schema_name
ORDER BY total_credits DESC;
-- Cortex AI functions usage (last 30 days)
SELECT
    function_name,
    model_name,
    COUNT(*) AS total_calls,
    SUM(credits) AS total_credits
FROM SNOWFLAKE.ACCOUNT_USAGE.CORTEX_AI_FUNCTIONS_USAGE_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY function_name, model_name
ORDER BY total_credits DESC;

2. Storage Monitoring#

Storage costs in Snowflake compound over time, particularly if Time Travel retention periods are generous and tables are never cleaned up.

Account-Level Storage Trend#

-- Overall account storage trend (monthly)
SELECT
    DATE_TRUNC('month', usage_date) AS usage_month,
    AVG(storage_bytes) / POWER(1024, 4) AS avg_storage_tb,
    AVG(stage_bytes) / POWER(1024, 4) AS avg_stage_tb,
    AVG(failsafe_bytes) / POWER(1024, 4) AS avg_failsafe_tb,
    AVG(storage_bytes + stage_bytes + failsafe_bytes) / POWER(1024, 4) AS avg_billable_tb
FROM SNOWFLAKE.ACCOUNT_USAGE.STORAGE_USAGE
GROUP BY usage_month
ORDER BY usage_month;

Top Tables by Storage#

Identifying the largest tables helps prioritise cleanup efforts. The breakdown into active, time travel, and failsafe bytes shows exactly where storage is being consumed:

-- Top 25 largest tables by storage
SELECT
    table_catalog AS database_name,
    table_schema AS schema_name,
    table_name,
    active_bytes / POWER(1024, 3) AS active_gb,
    time_travel_bytes / POWER(1024, 3) AS time_travel_gb,
    failsafe_bytes / POWER(1024, 3) AS failsafe_gb,
    (active_bytes + time_travel_bytes + failsafe_bytes) / POWER(1024, 3) AS total_gb
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
WHERE active_bytes > 0
ORDER BY total_gb DESC
LIMIT 25;

Tip: Tables with a high failsafe_gb relative to active_gb are candidates for reducing their DATA_RETENTION_TIME setting. Transient tables carry no failsafe overhead — worth considering for staging or temporary datasets.
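
Both remediations from the tip can be sketched as follows (all object names are illustrative):

```sql
-- Hypothetical table: trim Time Travel retention on a large table
ALTER TABLE staging_db.raw.events SET DATA_RETENTION_TIME_IN_DAYS = 1;

-- Transient tables carry no fail-safe overhead - suited to staging data
CREATE TRANSIENT TABLE staging_db.raw.events_stage LIKE staging_db.raw.events;
```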


3. Security and Access#

Security monitoring is where ACCOUNT_USAGE really earns its keep. Login history, failed attempts, MFA compliance, and role hierarchies are all queryable — and all essential for a secure platform.

Login History and Failed Attempts#

-- Login history summary (last 30 days) - who is logging in and how
SELECT
    user_name,
    reported_client_type,
    first_authentication_factor,
    second_authentication_factor,
    COUNT(*) AS login_count,
    SUM(IFF(is_success = 'YES', 1, 0)) AS successful_logins,
    SUM(IFF(is_success = 'NO', 1, 0)) AS failed_logins
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE event_timestamp >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY user_name, reported_client_type, first_authentication_factor, second_authentication_factor
ORDER BY failed_logins DESC, login_count DESC;
-- Failed login attempts (last 7 days) - potential security concern
SELECT
    event_timestamp,
    user_name,
    client_ip,
    reported_client_type,
    error_code,
    error_message
FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
WHERE is_success = 'NO'
    AND event_timestamp >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY event_timestamp DESC;

A cluster of failed logins from an unfamiliar IP against a service account warrants immediate investigation. Schedule this query to run daily as part of your security operations routine.
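
One way to schedule it is a serverless task that appends failures to a monitoring table each morning. This is a sketch: the database, schema, target table, and cron expression are all assumptions:

```sql
-- Hypothetical objects: daily snapshot of failed logins at 07:00 UTC
CREATE TASK IF NOT EXISTS security_ops.monitoring.failed_login_check
    SCHEDULE = 'USING CRON 0 7 * * * UTC'
AS
    INSERT INTO security_ops.monitoring.failed_login_log
    SELECT event_timestamp, user_name, client_ip, error_code, error_message
    FROM SNOWFLAKE.ACCOUNT_USAGE.LOGIN_HISTORY
    WHERE is_success = 'NO'
        AND event_timestamp >= DATEADD('day', -1, CURRENT_TIMESTAMP());

ALTER TASK security_ops.monitoring.failed_login_check RESUME;  -- tasks start suspended
```

With no WAREHOUSE clause the task runs serverless. Keep the view's up-to-3-hour latency in mind when choosing the schedule and lookback window.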

MFA and Dormant User Compliance#

-- Users without MFA enabled
SELECT
    name AS user_name,
    login_name,
    email,
    created_on,
    last_success_login,
    has_mfa
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
    AND has_mfa = FALSE
ORDER BY last_success_login DESC;
-- Users who have not logged in for 90+ days (candidates for deactivation)
SELECT
    name AS user_name,
    login_name,
    email,
    created_on,
    last_success_login,
    disabled,
    DATEDIFF('day', last_success_login, CURRENT_TIMESTAMP()) AS days_since_last_login
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
    AND (last_success_login < DATEADD('day', -90, CURRENT_TIMESTAMP())
         OR last_success_login IS NULL)
ORDER BY last_success_login ASC NULLS FIRST;

From experience: run the dormant user query as part of a monthly access review, and disable rather than delete accounts where there is any uncertainty. Deletion is irreversible, whereas disabling preserves the audit trail.
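
Disabling is a single statement (the user name is illustrative):

```sql
-- Hypothetical user: disable rather than drop
ALTER USER dormant_user SET DISABLED = TRUE;
```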

Role Hierarchy and Direct Grants#

-- All roles and their grant hierarchy
SELECT
    grantee_name AS role_name,
    name AS granted_role,
    granted_by,
    created_on
FROM SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_ROLES
WHERE privilege = 'USAGE'
    AND granted_on = 'ROLE'
    AND deleted_on IS NULL
ORDER BY grantee_name, granted_role;
-- Roles granted directly to users (audit regularly; keep the hierarchy clean)
SELECT
    grantee_name AS user_name,
    role
FROM SNOWFLAKE.ACCOUNT_USAGE.GRANTS_TO_USERS
WHERE deleted_on IS NULL
ORDER BY user_name, role;

Best practice: Access should flow through a well-structured role hierarchy. Ad-hoc role grants to individual users are harder to audit and revoke, and they erode the hierarchy over time. Any grants in this query that fall outside your standard provisioning process should be reviewed and migrated into role-based access.

Network and Data Policies#

-- Network policies configured
SELECT
    name AS policy_name,
    created,
    owner,
    allowed_ip_list,
    blocked_ip_list,
    comment
FROM SNOWFLAKE.ACCOUNT_USAGE.NETWORK_POLICIES
WHERE deleted IS NULL
ORDER BY created;
-- Masking policies in use
SELECT
    policy_name,
    policy_schema,
    policy_catalog AS database_name,
    policy_owner,
    created
FROM SNOWFLAKE.ACCOUNT_USAGE.MASKING_POLICIES
WHERE deleted IS NULL
ORDER BY created;

4. Warehouse Performance#

Understanding how warehouses behave under load is essential for right-sizing and avoiding queue buildup.

Identifying Overloaded Warehouses#

-- Warehouse load analysis (last 7 days) - identify overloaded warehouses
SELECT
    warehouse_name,
    DATE_TRUNC('hour', start_time) AS hour,
    AVG(avg_running) AS avg_running_queries,
    AVG(avg_queued_load) AS avg_queued_overload,
    AVG(avg_queued_provisioning) AS avg_queued_provisioning,
    AVG(avg_blocked) AS avg_blocked
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_LOAD_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
GROUP BY warehouse_name, hour
HAVING avg_queued_overload > 0
ORDER BY avg_queued_overload DESC
LIMIT 50;

The HAVING avg_queued_overload > 0 filter surfaces only the hours where queries were actually waiting. If a warehouse shows consistent queueing at the same time each day, that is a strong signal to enable multi-cluster warehouses or scale up during that window.
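
A sketch of the scale-out change (the warehouse name and cluster counts are illustrative; multi-cluster warehouses require Enterprise Edition or above):

```sql
-- Hypothetical warehouse: allow up to 3 clusters during busy windows
ALTER WAREHOUSE analytics_wh SET
    MIN_CLUSTER_COUNT = 1,
    MAX_CLUSTER_COUNT = 3,
    SCALING_POLICY = 'STANDARD';
```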

Utilisation Summary#

-- Warehouse utilisation summary (last 30 days)
SELECT
    warehouse_name,
    COUNT(DISTINCT start_time::DATE) AS active_days,
    SUM(credits_used) AS total_credits,
    ROUND(SUM(credits_used) / COUNT(DISTINCT start_time::DATE), 2) AS avg_credits_per_day
FROM SNOWFLAKE.ACCOUNT_USAGE.WAREHOUSE_METERING_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY warehouse_name
ORDER BY total_credits DESC;

5. Query Performance and History#

QUERY_HISTORY is the most powerful view in ACCOUNT_USAGE for diagnosing performance problems.

Slowest Queries#
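
The query for this section appears to be missing from this copy of the article. Here is a sketch that produces the columns the commentary below refers to (the 7-day window and top-25 limit are assumptions, mirroring the other examples):

```sql
-- Top 25 slowest queries (last 7 days)
SELECT
    query_id,
    user_name,
    warehouse_name,
    warehouse_size,
    total_elapsed_time / 1000 AS elapsed_seconds,
    queued_overload_time / 1000 AS queued_seconds,
    compilation_time / 1000 AS compilation_seconds,
    bytes_scanned / POWER(1024, 3) AS gb_scanned,
    rows_produced,
    query_text
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND execution_status = 'SUCCESS'
ORDER BY total_elapsed_time DESC
LIMIT 25;
```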

Interpreting the results: Check whether queued_seconds is a large proportion of elapsed_seconds — if so, the problem is warehouse capacity, not query efficiency. If compilation_seconds is high, the query may be overly complex or benefit from query caching. If gb_scanned is high relative to rows returned, consider adding a cluster key.

High-Spillage Queries#

Spillage to remote storage is an expensive warning sign — it indicates that a query exceeded the warehouse’s memory and began writing intermediate results to disk:

-- Queries with high spillage (indicating need for larger warehouse)
SELECT
    query_id,
    user_name,
    warehouse_name,
    warehouse_size,
    bytes_spilled_to_local_storage / POWER(1024, 3) AS gb_spilled_local,
    bytes_spilled_to_remote_storage / POWER(1024, 3) AS gb_spilled_remote,
    total_elapsed_time / 1000 AS elapsed_seconds,
    query_text
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND (bytes_spilled_to_local_storage > 0 OR bytes_spilled_to_remote_storage > 0)
ORDER BY bytes_spilled_to_remote_storage DESC
LIMIT 25;

Remote spillage (to S3/Azure Blob) is much slower and more expensive than local spillage. Either scale up the warehouse for those queries or investigate whether they can be rewritten to reduce the intermediate dataset size.

Most Frequently Run Queries#

Using query_parameterized_hash groups queries with different literal values but the same structure — useful for finding patterns rather than individual executions:

-- Most frequently run queries (last 7 days, by query hash)
SELECT
    query_parameterized_hash,
    ANY_VALUE(query_text) AS sample_query,
    COUNT(*) AS execution_count,
    AVG(total_elapsed_time) / 1000 AS avg_elapsed_seconds,
    SUM(credits_used_cloud_services) AS total_cloud_credits,
    ANY_VALUE(user_name) AS sample_user,
    ANY_VALUE(warehouse_name) AS sample_warehouse
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND execution_status = 'SUCCESS'
GROUP BY query_parameterized_hash
ORDER BY execution_count DESC
LIMIT 25;

Failed Queries and Recurring Errors#

-- Failed queries (last 7 days) - identify recurring errors
SELECT
    error_code,
    error_message,
    COUNT(*) AS occurrence_count,
    COUNT(DISTINCT user_name) AS affected_users,
    ANY_VALUE(query_text) AS sample_query
FROM SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY
WHERE start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND execution_status = 'FAIL'
GROUP BY error_code, error_message
ORDER BY occurrence_count DESC
LIMIT 25;

6. Platform Inventory#

Before you can govern a platform you need to know exactly what is on it. These queries give you a complete account inventory.

Databases and Schemas#

-- All databases
SELECT
    database_name,
    database_owner,
    created,
    last_altered,
    retention_time,
    is_transient,
    comment
FROM SNOWFLAKE.ACCOUNT_USAGE.DATABASES
WHERE deleted IS NULL
ORDER BY database_name;
-- Table count and type per schema
SELECT
    table_catalog AS database_name,
    table_schema AS schema_name,
    table_type,
    COUNT(*) AS table_count
FROM SNOWFLAKE.ACCOUNT_USAGE.TABLES
WHERE deleted IS NULL
GROUP BY table_catalog, table_schema, table_type
ORDER BY database_name, schema_name, table_type;

Users, Roles, and Resource Monitors#

-- All users in the account
SELECT
    name AS user_name,
    login_name,
    email,
    default_role,
    default_warehouse,
    created_on,
    last_success_login,
    disabled,
    has_mfa,
    has_password
FROM SNOWFLAKE.ACCOUNT_USAGE.USERS
WHERE deleted_on IS NULL
ORDER BY user_name;
-- Resource monitors configured
SELECT
    name AS monitor_name,
    credit_quota,
    used_credits,
    remaining_credits,
    notify,
    suspend,
    suspend_immediate,
    owner,
    created,
    warehouses
FROM SNOWFLAKE.ACCOUNT_USAGE.RESOURCE_MONITORS;

Tasks and Pipes#

-- All tasks (latest version per task)
SELECT
    database_name,
    schema_name,
    name AS task_name,
    owner,
    warehouse_name,
    schedule,
    state,
    graph_version_created_on
FROM SNOWFLAKE.ACCOUNT_USAGE.TASK_VERSIONS
QUALIFY ROW_NUMBER() OVER (PARTITION BY database_name, schema_name, name ORDER BY graph_version DESC) = 1
ORDER BY database_name, schema_name, task_name;
-- All pipes (Snowpipe)
SELECT
    pipe_catalog AS database_name,
    pipe_schema AS schema_name,
    pipe_name,
    pipe_owner,
    is_autoingest_enabled,
    notification_channel_name,
    created
FROM SNOWFLAKE.ACCOUNT_USAGE.PIPES
WHERE deleted IS NULL
ORDER BY database_name, schema_name, pipe_name;

7. Data Governance#

Governance monitoring ensures that tagging, classification, and data sharing are operating as intended.

Tag References#

-- Tags in use across the account
SELECT
    tag_database,
    tag_schema,
    tag_name,
    tag_value,
    object_database,
    object_schema,
    object_name,
    domain AS object_type,
    column_name
FROM SNOWFLAKE.ACCOUNT_USAGE.TAG_REFERENCES
WHERE object_deleted IS NULL
ORDER BY tag_database, tag_name, object_name;

Policy References#

-- Policy references (which tables/columns have policies applied)
SELECT
    policy_db,
    policy_schema,
    policy_name,
    policy_kind,
    ref_database_name,
    ref_schema_name,
    ref_entity_name,
    ref_entity_domain,
    ref_column_name
FROM SNOWFLAKE.ACCOUNT_USAGE.POLICY_REFERENCES
ORDER BY policy_kind, policy_name;

This query is particularly useful for compliance audits — you can quickly verify that every sensitive column has a masking policy applied, or that row access policies are attached to the tables that require them.

Data Classification (Enterprise Edition)#

-- Data classification results
SELECT
    database_name,
    schema_name,
    table_name,
    result,
    status,
    trigger_type,
    last_classified_on
FROM SNOWFLAKE.ACCOUNT_USAGE.DATA_CLASSIFICATION_LATEST
ORDER BY database_name, schema_name, table_name;

8. Data Loading and Movement#

COPY INTO Load History#

-- Recent COPY INTO load history (last 7 days)
SELECT
    table_catalog_name AS database_name,
    table_schema_name AS schema_name,
    table_name,
    file_name,
    status,
    row_count,
    row_parsed,
    error_count,
    last_load_time
FROM SNOWFLAKE.ACCOUNT_USAGE.COPY_HISTORY
WHERE last_load_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY last_load_time DESC
LIMIT 100;

Cross-Region Data Transfer#

Cross-region and cross-cloud data transfers incur additional charges. This query surfaces those transfers so you can understand egress costs and, where possible, co-locate compute and storage:

-- Data transfer history (last 30 days)
SELECT
    start_time::DATE AS transfer_date,
    source_cloud,
    source_region,
    target_cloud,
    target_region,
    transfer_type,
    SUM(bytes_transferred) / POWER(1024, 3) AS gb_transferred
FROM SNOWFLAKE.ACCOUNT_USAGE.DATA_TRANSFER_HISTORY
WHERE start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
GROUP BY transfer_date, source_cloud, source_region, target_cloud, target_region, transfer_type
ORDER BY transfer_date DESC;

9. Task and Pipeline Monitoring#

Task Execution History#

-- Task execution history (last 7 days)
SELECT
    database_name,
    schema_name,
    name AS task_name,
    state,
    scheduled_time,
    completed_time,
    error_code,
    error_message,
    DATEDIFF('second', scheduled_time, completed_time) AS duration_seconds
FROM SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY
WHERE scheduled_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY scheduled_time DESC
LIMIT 100;
-- Failed tasks (last 7 days)
SELECT
    database_name,
    schema_name,
    name AS task_name,
    scheduled_time,
    error_code,
    error_message
FROM SNOWFLAKE.ACCOUNT_USAGE.TASK_HISTORY
WHERE scheduled_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
    AND state = 'FAILED'
ORDER BY scheduled_time DESC;

Dynamic Table Refresh History#

-- Dynamic table refresh history (last 7 days)
SELECT
    database_name,
    schema_name,
    name AS dynamic_table_name,
    state,
    refresh_start_time,
    refresh_end_time,
    DATEDIFF('second', refresh_start_time, refresh_end_time) AS refresh_duration_seconds
FROM SNOWFLAKE.ACCOUNT_USAGE.DYNAMIC_TABLE_REFRESH_HISTORY
WHERE refresh_start_time >= DATEADD('day', -7, CURRENT_TIMESTAMP())
ORDER BY refresh_start_time DESC
LIMIT 100;

10. Sessions, Access Patterns, and Account Health#

Session and Login Patterns#

-- Active sessions by user and client type (last 24 hours)
SELECT
    user_name,
    client_application_id,
    COUNT(*) AS session_count,
    MIN(created_on) AS earliest_session,
    MAX(created_on) AS latest_session
FROM SNOWFLAKE.ACCOUNT_USAGE.SESSIONS
WHERE created_on >= DATEADD('day', -1, CURRENT_TIMESTAMP())
GROUP BY user_name, client_application_id
ORDER BY session_count DESC;
-- Most accessed tables (last 30 days, via access history - Enterprise Edition)
SELECT
    obj.value:objectName::STRING AS object_name,
    obj.value:objectDomain::STRING AS object_type,
    COUNT(DISTINCT query_id) AS access_count,
    COUNT(DISTINCT user_name) AS distinct_users
FROM SNOWFLAKE.ACCOUNT_USAGE.ACCESS_HISTORY,
    LATERAL FLATTEN(input => base_objects_accessed) obj
WHERE query_start_time >= DATEADD('day', -30, CURRENT_TIMESTAMP())
    AND obj.value:objectDomain::STRING IN ('Table', 'View')
GROUP BY object_name, object_type
ORDER BY access_count DESC
LIMIT 25;

Trust Center and Lock Contention#

-- Trust Center findings (security scanner results)
SELECT
    id,
    severity,
    state,
    created_on,
    updated_on
FROM SNOWFLAKE.ACCOUNT_USAGE.TRUST_CENTER_FINDINGS
WHERE created_on >= DATEADD('day', -30, CURRENT_TIMESTAMP())
ORDER BY severity, created_on DESC;

11. Real-Time Queries with INFORMATION_SCHEMA#

When you need zero-latency information — for example, to investigate an active performance problem or see the current state of a table — use INFORMATION_SCHEMA instead of ACCOUNT_USAGE. The trade-off is shorter retention, but for operational use that is rarely a concern.

Currently Running Queries#

-- Currently running queries (real-time)
SELECT
    query_id,
    query_text,
    user_name,
    warehouse_name,
    execution_status,
    start_time,
    DATEDIFF('second', start_time, CURRENT_TIMESTAMP()) AS running_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY_BY_WAREHOUSE(
    WAREHOUSE_NAME => CURRENT_WAREHOUSE(),
    RESULT_LIMIT => 50
))
WHERE execution_status = 'RUNNING'
ORDER BY start_time;

Recent Query History (Last Hour)#
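
The example for this section appears to be missing. Here is a sketch using the INFORMATION_SCHEMA.QUERY_HISTORY table function (the one-hour window follows from the heading; the RESULT_LIMIT value is an assumption):

```sql
-- Query history for the last hour (real-time, current role's visibility)
SELECT
    query_id,
    query_text,
    user_name,
    warehouse_name,
    execution_status,
    start_time,
    total_elapsed_time / 1000 AS elapsed_seconds
FROM TABLE(INFORMATION_SCHEMA.QUERY_HISTORY(
    END_TIME_RANGE_START => DATEADD('hour', -1, CURRENT_TIMESTAMP()),
    RESULT_LIMIT => 1000
))
ORDER BY start_time DESC;
```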

Real-Time Table and Column Metadata#

-- All tables in a specific database (real-time)
-- Replace <DATABASE_NAME> with your target database
SELECT
    table_catalog,
    table_schema,
    table_name,
    table_type,
    row_count,
    bytes,
    created,
    last_altered
FROM <DATABASE_NAME>.INFORMATION_SCHEMA.TABLES
WHERE table_schema != 'INFORMATION_SCHEMA'
ORDER BY bytes DESC NULLS LAST;
-- All columns for a specific table (real-time)
-- Replace placeholders with your target database, schema, and table
SELECT
    column_name,
    data_type,
    is_nullable,
    column_default,
    character_maximum_length,
    numeric_precision,
    numeric_scale,
    ordinal_position
FROM <DATABASE_NAME>.INFORMATION_SCHEMA.COLUMNS
WHERE table_schema = '<SCHEMA_NAME>'
    AND table_name = '<TABLE_NAME>'
ORDER BY ordinal_position;

Best Practices#

From experience, these are the practices worth building into your monitoring routine:

  1. Create a dedicated monitoring role: Grant IMPORTED PRIVILEGES on the SNOWFLAKE database to a MONITOR_ROLE and assign it to your operations users. Avoid running admin queries as ACCOUNTADMIN habitually.

  2. Schedule daily security checks: Automate the failed login, dormant user, and MFA compliance queries using Snowflake Tasks. Push results to a monitoring table or alert via email integration when thresholds are exceeded.

  3. Set resource monitors on every warehouse: Every warehouse should have a resource monitor with notification thresholds. Use SUSPEND_IMMEDIATE carefully — it cancels running queries — but having it as a backstop prevents runaway costs.

  4. Review idle credits weekly: The idle credit query (section 1) should be part of your weekly cost review. Idle credits represent the clearest opportunity for cost reduction without impacting workloads.

  5. Use query_parameterized_hash for pattern analysis: When investigating performance, group by hash rather than query text. Individual queries with different filter values will otherwise appear as separate problems.

  6. Remember the latency: Build dashboards on ACCOUNT_USAGE for trend analysis and alerting. Use INFORMATION_SCHEMA for operational investigation. Mixing the two without understanding their latency differences leads to misleading comparisons.
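
The backstop from point 3 can be sketched as follows (the monitor name, quota, and thresholds are illustrative):

```sql
-- Hypothetical monitor: notify at 75%, suspend at 95%, hard-stop at 100%
CREATE RESOURCE MONITOR analytics_monitor WITH
    CREDIT_QUOTA = 100
    FREQUENCY = MONTHLY
    START_TIMESTAMP = IMMEDIATELY
    TRIGGERS
        ON 75 PERCENT DO NOTIFY
        ON 95 PERCENT DO SUSPEND
        ON 100 PERCENT DO SUSPEND_IMMEDIATE;

ALTER WAREHOUSE analytics_wh SET RESOURCE_MONITOR = analytics_monitor;
```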

```mermaid
graph TD
    A[Daily Security Check] --> B[Failed Logins<br/>LOGIN_HISTORY]
    A --> C[MFA Compliance<br/>USERS]
    A --> D[Trust Center<br/>TRUST_CENTER_FINDINGS]
    E[Weekly Cost Review] --> F[Idle Credits<br/>WAREHOUSE_METERING_HISTORY]
    E --> G[Service Breakdown<br/>METERING_HISTORY]
    E --> H[Top Tables<br/>TABLE_STORAGE_METRICS]
    I[Monthly Access Review] --> J[Dormant Users<br/>USERS]
    I --> K[Policy Coverage<br/>POLICY_REFERENCES]
    I --> L[Role Hierarchy<br/>GRANTS_TO_ROLES]
```

Conclusion#

SNOWFLAKE.ACCOUNT_USAGE is one of the most underutilised capabilities available to Snowflake administrators. The views covered in this article give you end-to-end visibility across cost, security, performance, and governance — the four pillars of responsible platform management.

The most effective approach is to build these queries into a regular operational cadence: daily security checks, weekly cost reviews, and monthly access audits. Snowflake Tasks make it straightforward to automate these routines and centralise the results for easy review.

Key Takeaways:

  • ACCOUNT_USAGE views have up to 3 hours of latency — use INFORMATION_SCHEMA for real-time needs
  • Idle warehouse credits are usually the fastest win in any cost optimisation exercise
  • Security monitoring — failed logins, dormant users, MFA compliance — should be automated, not manual
  • query_parameterized_hash makes it far easier to identify patterns in query performance at scale
  • Role-based access reviews and policy reference audits are essential for governance compliance


Further Reading#

Disclaimer

The information provided on this website is for general informational purposes only. While we strive to keep the information up to date and correct, there may be instances where information is outdated or links are no longer valid. We make no representations or warranties of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability with respect to the website or the information, products, services, or related graphics contained on the website for any purpose. Any reliance you place on such information is therefore strictly at your own risk.