Skill: supabase-postgres-best-practices
🐘

supabase-postgres-best-practices

Safe ⚡ Contains scripts ⚙️ External commands 🌐 Network access

Optimize Postgres performance with expert best practices

Also available from: supabase

Slow queries and poor schema design plague Postgres applications. This skill provides 26 prioritized rules from Supabase experts, enabling 10-1000x performance improvements.

Supports: Claude Code (CC)
🥉 73 Bronze
1

Download the skill ZIP

2

Upload it in Claude

Go to Settings → Capabilities → Skills → Upload skill

3

Enable it and start using it

Test it

Using "supabase-postgres-best-practices". Review my Postgres query and suggest optimizations based on best practices: SELECT * FROM events WHERE user_id = 123 ORDER BY created_at DESC LIMIT 20

Expected result:

Based on query-missing-indexes.md rules, your query needs an index on the filter and order columns:

1. Add composite index:
```sql
CREATE INDEX events_user_id_created_at_idx
ON events (user_id, created_at DESC);
```

2. Expected improvement: Index scan reduces cost from 25,000 to 8.44 (100-1000x faster on large tables).

3. For LIMIT queries, consider using covering indexes to avoid table lookups if you only need specific columns.
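A covering index for this query pattern could look like the following sketch. The `INCLUDE` syntax requires Postgres 11+, and `event_type` is a hypothetical payload column standing in for whatever the application actually reads:

```sql
-- Covering index: the filter and sort columns form the key, and a
-- frequently read column rides along in INCLUDE so the planner can
-- satisfy the query with an index-only scan (no heap fetches).
CREATE INDEX events_user_created_covering_idx
  ON events (user_id, created_at DESC)
  INCLUDE (event_type);  -- hypothetical column, adjust to your schema
```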

Using "supabase-postgres-best-practices". Is this batch insert pattern following best practices? for (let i = 0; i < 1000; i++) { await db.query('INSERT INTO logs (message) VALUES ($1)', [logs[i]]); }

Expected result:

No, this violates data-batch-inserts.md rules. You have:

- 1000 individual round trips
- 1000 separate transactions
- Estimated 10-50x slower than necessary

Correct approach (from rules):
```sql
INSERT INTO logs (message) VALUES
($1), ($2), ($3), ... up to ~1000 rows;
```

Or use COPY for bulk imports (fastest method). See data-batch-inserts.md for examples.
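Both alternatives can be sketched concretely. This assumes the `logs (message)` table from the example above; the CSV file name is illustrative:

```sql
-- Option 1: one statement, many rows — one round trip and one
-- transaction instead of 1000.
INSERT INTO logs (message)
VALUES ('first message'),
       ('second message'),
       ('third message');

-- Option 2: COPY, the fastest bulk-load path. psql's \copy wrapper
-- streams rows from a client-side file.
\copy logs (message) FROM 'logs.csv' WITH (FORMAT csv)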

Using "supabase-postgres-best-practices". Should I add an index on this foreign key column that gets queried occasionally?

Expected result:

Yes, absolutely. From schema-foreign-key-indexes.md:

**Impact: CRITICAL (5-20x improvement)**

Foreign key columns should almost always be indexed because:
1. JOINs on foreign keys cause sequential scans without indexes
2. DELETEs on parent tables require scanning child tables
3. Cascade operations become exponentially slower on large tables

The rule notes: Even occasional queries benefit from indexing foreign keys due to exponential scan costs.
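As a concrete sketch of that advice (table and column names are hypothetical):

```sql
-- Index the foreign key column so JOINs against the parent table and
-- cascading DELETEs on it avoid sequential scans of the child table.
CREATE INDEX orders_customer_id_idx ON orders (customer_id);
```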

Security audit

Safe
v1 • 2/25/2026

All 710 static findings are false positives. This is a documentation-only skill containing Postgres best practices in Markdown format. The flagged patterns (backticks, MD5 references, URLs, system queries) are all legitimate SQL examples, documentation links, and monitoring queries. No executable code, no data exfiltration, no malicious intent detected.

37
Files scanned
3,485
Lines analyzed
8
Findings
1
Total audits
Medium-risk issues (1)
False Positive - SQL Code Blocks Flagged as External Commands
Static scanner detected 344 instances of 'Ruby/shell backtick execution' across Markdown files. Investigation confirms these are Markdown code blocks containing SQL examples (e.g., ```sql ... ```) and inline backticks for emphasis. No actual shell execution exists. Confidence: 0.98 - Direct evidence that backticks are Markdown syntax, not executable code.
Low-risk issues (4)
False Positive - MD5 References Not Cryptographic Weakness
Static scanner flagged 46 instances of 'Weak cryptographic algorithm (MD5)'. Investigation confirms these are: (1) MD5 checksums in documentation URLs (Supabase/PostgreSQL docs), (2) File integrity hashes in metadata.json, not actual cryptographic implementations. No security risk. Confidence: 0.95 - Clear evidence these are documentation references, not crypto code.
False Positive - Hardcoded URLs Are Legitimate Documentation Links
Static scanner detected 72 'Hardcoded URL' instances. Investigation confirms all are legitimate documentation references to supabase.com/docs and postgresql.org/docs. No external data exfiltration, no malicious endpoints. Confidence: 0.99 - All URLs point to official Supabase/PostgreSQL documentation.
False Positive - System Queries Are Legitimate Monitoring
Static scanner flagged 158 'System reconnaissance' patterns. Investigation confirms these are standard PostgreSQL monitoring queries (pg_stat_activity, pg_class, pg_indexes) used for performance diagnostics. No reconnaissance activity. Confidence: 0.97 - Standard Postgres system catalog queries for DBA monitoring.
False Positive - SQL WITH Clauses Not JavaScript with Statements
Static scanner detected 4 'with statement (deprecated)' patterns. Investigation confirms these are SQL Common Table Expressions (CTEs) using 'WITH' clause, not deprecated JavaScript 'with' statements. No scope confusion risk. Confidence: 0.99 - Clearly SQL syntax in Postgres documentation.
Auditor: claude

Quality score

38
Architecture
100
Maintainability
87
Content
50
Community
92
Security
91
Spec compliance

What you can build

Query performance troubleshooting

Developers with slow API endpoints use the query optimization rules to add indexes and rewrite queries, achieving 100-1000x improvements.

Database schema design review

Database architects review the schema design rules before launching a multi-tenant SaaS application, avoiding costly refactoring.

Postgres migration planning

DevOps engineers use the RLS and connection pooling guides to plan a migration from single-tenant to multi-tenant architecture.

Try these prompts

Basic query optimization
I have a slow Postgres query. Help me optimize it using best practices from the supabase-postgres-best-practices skill.

My query:
```sql
SELECT * FROM orders WHERE customer_id = 123 AND status = 'pending'
```

Table has 10 million rows. Query takes 5 seconds.
Index strategy review
Review my indexing strategy for this schema using the supabase-postgres-best-practices skill. Focus on composite indexes, partial indexes, and foreign key indexing.

Schema:
- users table (1M rows)
- orders table (5M rows, foreign key to users)
- Query pattern: frequently filter by user_id + created_at + status
RLS policy optimization
I'm implementing Row-Level Security for multi-tenant data using supabase-postgres-best-practices. Help me optimize RLS policies.

Current policy:
```sql
CREATE POLICY user_isolation ON documents
  USING (auth.uid() = user_id)
  WITH CHECK (auth.uid() = user_id);
```

Query performance degraded 5x after enabling RLS.
Connection pool configuration
Help me configure connection pooling for a Node.js application with Supabase using supabase-postgres-best-practices.

Requirements:
- 1000 concurrent users
- Average query time: 50ms
- Using PgBouncer
- Experiencing connection exhaustion errors

Provide specific configuration values and explain trade-offs.

Best practices

  • Always create indexes on WHERE, JOIN, and ORDER BY columns before deploying to production
  • For large result sets, use cursor pagination on indexed columns instead of OFFSET
  • Keep transactions short (under 1 second) and avoid user interaction mid-transaction to prevent lock contention
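The cursor-pagination bullet can be sketched as follows. Table and cursor values are illustrative; the client passes back the last row's `created_at` and `id` as the cursor, and the row-value comparison reuses the same composite index:

```sql
-- First page: plain indexed ORDER BY + LIMIT.
SELECT id, created_at FROM events
ORDER BY created_at DESC, id DESC
LIMIT 20;

-- Next page: seek past the last row seen instead of OFFSET.
-- For DESC ordering, the next page holds strictly smaller values.
SELECT id, created_at FROM events
WHERE (created_at, id) < ('2026-01-01 00:00:00', 12345)
ORDER BY created_at DESC, id DESC
LIMIT 20;
```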

Avoid

  • Using SELECT * on large tables when only specific columns are needed (causes unnecessary I/O and prevents covering-index optimizations)
  • Running individual INSERT statements in a loop instead of batching rows or using COPY
  • Creating indexes without analyzing query patterns with EXPLAIN ANALYZE (some indexes hurt write performance without helping reads)
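The plan-inspection step mentioned in the last bullet looks like this in practice (the query is taken from the earlier prompt example; BUFFERS is optional but shows I/O activity):

```sql
-- Run the real query under EXPLAIN ANALYZE and look for "Seq Scan"
-- nodes; after adding a candidate index, rerun and confirm the plan
-- switched to an Index Scan with a lower estimated cost.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE customer_id = 123 AND status = 'pending';
```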

FAQ

How do I know which columns need indexes?
Use the missing-index query in query-missing-indexes.md to identify frequently accessed columns without indexes. Run EXPLAIN ANALYZE on slow queries to spot sequential scans. Prioritize the WHERE and JOIN columns used in application-critical queries.
Should I add an index on every foreign key?
Almost always. Per schema-foreign-key-indexes.md, foreign key indexes give JOINs a 5-20x boost and prevent cascading DELETE performance problems. Add the index unless the column is never queried or joined.
Why is cursor pagination better than OFFSET?
OFFSET must scan and discard all preceding rows, getting slower the deeper you paginate. Cursor pagination (an indexed WHERE clause) has O(1) performance regardless of page depth. See data-pagination.md for implementation examples.
How large a connection pool does Supabase need?
Per conn-pooling.md: use PgBouncer in transaction mode. Pool size should be (number of app servers × connections per server). For 1000 concurrent users with 50ms queries, start with a pool size of 20-50 and monitor for exhaustion errors.
Does Row-Level Security (RLS) affect performance?
Yes. Poorly tuned policies can add 2-5x overhead. security-rls-performance.md explains that you should index policy columns, avoid subqueries in policies, and use security_invoker views. Well-tuned RLS policies preserve security at minimal performance cost.
Can I use these rules with plain PostgreSQL, or only with Supabase?
All core rules apply to plain PostgreSQL. Supabase-specific notes (connection pooling defaults, managed RLS) are clearly marked. The indexing, query optimization, and schema design rules work equally well on any Postgres 14+ deployment.
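As an illustration of the RLS tuning advice above, one commonly documented Supabase pattern is to index the policy column and wrap auth.uid() in a scalar subquery so it is evaluated once per statement rather than once per row. A sketch, reusing the documents table and policy from the earlier prompt:

```sql
-- Index the column the policy filters on.
CREATE INDEX documents_user_id_idx ON documents (user_id);

-- Wrapping auth.uid() in (SELECT ...) lets the planner treat it as a
-- stable init-plan value instead of re-evaluating it per row.
CREATE POLICY user_isolation ON documents
  USING ((SELECT auth.uid()) = user_id)
  WITH CHECK ((SELECT auth.uid()) = user_id);
```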