Slow queries and poor schema design plague Postgres applications. This skill provides 26 prioritized rules from Supabase experts, delivering 10-1000x performance improvements.
Download the skill ZIP
Upload it in Claude
Go to Settings → Capabilities → Skills → Upload skill
Enable it and start using it
Test it
Using "supabase-postgres-best-practices". Review my Postgres query and suggest optimizations based on best practices: SELECT * FROM events WHERE user_id = 123 ORDER BY created_at DESC LIMIT 20
Expected result:
Based on query-missing-indexes.md rules, your query needs an index on the filter and order columns:
1. Add composite index:
```sql
CREATE INDEX events_user_id_created_at_idx
ON events (user_id, created_at DESC);
```
2. Expected improvement: Index scan reduces cost from 25,000 to 8.44 (100-1000x faster on large tables).
3. For LIMIT queries, consider using covering indexes to avoid table lookups if you only need specific columns.
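For reference, a covering index for this query could be sketched as follows; the `INCLUDE` column `event_type` is a hypothetical payload column, so substitute whatever columns your SELECT actually returns:

```sql
-- Covering index: Postgres can answer the query from the index alone
-- (an index-only scan), skipping the heap lookup entirely.
-- The INCLUDE list (Postgres 11+) stores extra columns in the index
-- without making them part of the sort key.
CREATE INDEX events_user_id_created_at_covering_idx
    ON events (user_id, created_at DESC)
    INCLUDE (event_type);  -- hypothetical column; list what you SELECT
```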
Using "supabase-postgres-best-practices". Is this batch insert pattern following best practices? for (let i = 0; i < 1000; i++) { await db.query('INSERT INTO logs (message) VALUES ($1)', [logs[i]]); }
Expected result:
No, this violates data-batch-inserts.md rules. You have:
- 1000 individual round trips
- 1000 separate transactions
- Estimated 10-50x slower than necessary
Correct approach (from rules):
```sql
INSERT INTO logs (message) VALUES
($1), ($2), ($3), ... up to ~1000 rows;
```
Or use COPY for bulk imports (fastest method). See data-batch-inserts.md for examples.
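As a sketch of the COPY alternative mentioned above (the file path and CSV format are assumptions; adapt them to your data source):

```sql
-- COPY streams all rows over a single command, avoiding per-statement
-- parsing, planning, and round trips; it is the fastest bulk-load path.
COPY logs (message)
    FROM '/tmp/logs.csv'   -- hypothetical server-side file path
    WITH (FORMAT csv);
```

From a client, the equivalent is psql's `\copy` or your driver's COPY API, since server-side `COPY ... FROM` reads files on the database host.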
Using "supabase-postgres-best-practices". Should I add an index on this foreign key column that gets queried occasionally?
Expected result:
Yes, absolutely. From schema-foreign-key-indexes.md:
**Impact: CRITICAL (5-20x improvement)**
Foreign key columns should almost always be indexed because:
1. JOINs on foreign keys cause sequential scans without indexes
2. DELETEs on parent tables require scanning child tables
3. Cascade operations become exponentially slower on large tables
The rule notes: Even occasional queries benefit from indexing foreign keys due to exponential scan costs.
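A minimal sketch of indexing a foreign key column, assuming a hypothetical `orders.user_id` column that references `users(id)`:

```sql
-- Postgres does NOT create an index on the referencing (child) column
-- automatically; only the referenced (parent) side is indexed via its
-- primary key or unique constraint.
CREATE INDEX orders_user_id_idx ON orders (user_id);

-- With this index, JOINs on user_id and cascading DELETEs/UPDATEs on
-- users can locate child rows without a sequential scan of orders.
```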
Security audit
Safe: All 710 static findings are false positives. This is a documentation-only skill containing Postgres best practices in Markdown format. The flagged patterns (backticks, MD5 references, URLs, system queries) are all legitimate SQL examples, documentation links, and monitoring queries. No executable code, no data exfiltration, no malicious intent detected.
Medium-risk issues (1)
Low-risk issues (4)
What you can build
Query performance troubleshooting
A developer with slow API endpoints uses the query optimization rules to add indexes and rewrite queries, achieving 100-1000x improvements.
Database schema design review
A database architect reviews the schema design rules before launching a multi-tenant SaaS application, avoiding costly refactoring later.
Postgres migration planning
A DevOps engineer uses the RLS and connection pooling guides to plan a migration from a single-tenant to a multi-tenant architecture.
Try these prompts
I have a slow Postgres query. Help me optimize it using best practices from the supabase-postgres-best-practices skill. My query: ```sql SELECT * FROM orders WHERE customer_id = 123 AND status = 'pending' ``` Table has 10 million rows. Query takes 5 seconds.
Review my indexing strategy for this schema using the supabase-postgres-best-practices skill. Focus on composite indexes, partial indexes, and foreign key indexing. Schema: - users table (1M rows) - orders table (5M rows, foreign key to users) - Query pattern: frequently filter by user_id + created_at + status
I'm implementing Row-Level Security for multi-tenant data using supabase-postgres-best-practices. Help me optimize RLS policies. Current policy: ```sql CREATE POLICY user_isolation ON documents USING (auth.uid() = user_id) WITH CHECK (auth.uid() = user_id); ``` Query performance degraded 5x after enabling RLS.
Help me configure connection pooling for a Node.js application with Supabase using supabase-postgres-best-practices. Requirements: - 1000 concurrent users - Average query time: 50ms - Using PgBouncer - Experiencing connection exhaustion errors Provide specific configuration values and explain trade-offs.
Best practices
- Always create indexes on WHERE, JOIN, and ORDER BY columns before deploying to production
- For large result sets, use cursor (keyset) pagination on indexed columns instead of OFFSET
- Keep transactions short (under 1 second) and avoid user interaction mid-transaction to prevent lock contention
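The keyset pagination point above can be sketched as follows; the table, columns, and cursor values are illustrative placeholders:

```sql
-- OFFSET pagination scans and discards every skipped row:
--   SELECT ... ORDER BY created_at DESC OFFSET 100000 LIMIT 20;  -- slow
-- Keyset pagination instead seeks directly past the last row the
-- client saw, using a row-value comparison on the sort key:
SELECT id, created_at, message
FROM events
WHERE (created_at, id) < ('2024-01-01 00:00:00', 12345)  -- cursor from previous page
ORDER BY created_at DESC, id DESC
LIMIT 20;
-- Backed by a matching index so each page is an index range scan:
-- CREATE INDEX events_created_at_id_idx ON events (created_at DESC, id DESC);
```

Including a unique tiebreaker column (`id` here) in both the sort key and the cursor keeps pagination stable when `created_at` values collide.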
Avoid
- Using SELECT * on large tables when you only need specific columns (it causes unnecessary I/O and prevents covering-index optimization)
- Running individual INSERT statements in a loop instead of batch-inserting rows or using COPY
- Creating indexes without analyzing query patterns via EXPLAIN ANALYZE (some indexes hurt write performance without helping reads)
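To verify whether a query actually uses an index before and after creating it, EXPLAIN ANALYZE shows the chosen plan together with real execution timings; the table and filter below are illustrative:

```sql
-- ANALYZE executes the query and reports actual row counts and timing;
-- BUFFERS additionally shows how many data pages were read.
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders
WHERE customer_id = 123 AND status = 'pending';
-- In the output, look for "Index Scan" vs "Seq Scan", and compare the
-- actual total time before and after adding the candidate index.
```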