
dataql-quick
by adrianolaselva
SQL queries on CSV, JSON, Parquet, Excel files. Query S3, databases, URLs. Built for LLM integration with MCP server and Claude Code skills.
⭐ 0 🍴 0 📅 Jan 23, 2026
SKILL.md
name: dataql-quick
description: Quick data queries and previews. Use when user wants to see contents of a data file, check schema, or do simple filtering on CSV, JSON, or other data files.
tools:
- Bash
DataQL Quick Query
For fast data inspection and simple queries using DataQL.
Quick Commands
| Task | Command |
|---|---|
| Preview file | dataql run -f <file> -q "SELECT * FROM <table> LIMIT 5" |
| Count rows | dataql run -f <file> -q "SELECT COUNT(*) FROM <table>" |
| Check schema | dataql run -f <file> -q ".schema <table>" |
| List tables | dataql run -f <file> -q ".tables" |
| Distinct values | dataql run -f <file> -q "SELECT DISTINCT <column> FROM <table>" |
| Filter rows | dataql run -f <file> -q "SELECT * FROM <table> WHERE <condition> LIMIT 10" |
File Naming Convention
- Table name = filename without extension
- users.csv -> table name is users
- orders.json -> table name is orders
- data.parquet -> table name is data
Examples
Preview a CSV file
dataql run -f users.csv -q "SELECT * FROM users LIMIT 5"
Count records
dataql run -f orders.json -q "SELECT COUNT(*) as total FROM orders"
Check structure
dataql run -f data.parquet -q ".schema data"
Simple filter
dataql run -f products.csv -q "SELECT name, price FROM products WHERE price > 100 LIMIT 10"
Read from stdin
cat data.csv | dataql run -f - -q "SELECT * FROM stdin_data LIMIT 5"
Note: When reading from stdin, the default table name is stdin_data.
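For non-CSV data on stdin, the input format can be given explicitly. A minimal sketch, assuming a hypothetical events.jsonl file and that the -i flag (described in the Notes below) combines with -f - like this:
cat events.jsonl | dataql run -f - -i jsonl -q "SELECT * FROM stdin_data LIMIT 5"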
Supported Formats
- CSV (with custom delimiter: -d ";"; see the example after this list)
- JSON (arrays or objects)
- JSONL/NDJSON
- XML
- YAML
- Parquet
- Excel (.xlsx, .xls)
- Avro
- ORC
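As referenced above, a sketch for a semicolon-separated file, assuming a hypothetical sales.csv and the -d flag listed with CSV support:
dataql run -f sales.csv -d ";" -q "SELECT * FROM sales LIMIT 5"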
Output Options
- Default: formatted table
- JSON output: pipe to jq or use export
- CSV export: -e output.csv -t csv (see the example below)
- JSONL export: -e output.jsonl -t jsonl
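A sketch of exporting results instead of printing a table, assuming a hypothetical orders.json input and that the -e and -t flags combine with run as listed above:
dataql run -f orders.json -q "SELECT * FROM orders WHERE total > 100" -e filtered.csv -t csv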
Exploratory Statistics
# Get comprehensive statistics for a file
dataql describe -f data.csv
# Describe specific table after loading
dataql run -f data.csv -q ".describe data"
Quick Troubleshooting
| Problem | Solution |
|---|---|
| "column not found" | Check column names with .schema (case sensitive) |
| "table not found" | Table name = filename without extension |
| Too much output | Add LIMIT 10 or LIMIT 5 to query |
| File too large | Use --cache flag to cache imported data |
| Wrong delimiter | Use -d ";" for semicolon-separated files |
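For repeated queries on the same large file, the --cache flag from the table above avoids re-importing the data on every run. A sketch, assuming a hypothetical events.parquet:
# first run imports and caches the data
dataql run -f events.parquet --cache -q "SELECT COUNT(*) FROM events"
# later runs reuse the cached import
dataql run -f events.parquet --cache -q "SELECT * FROM events WHERE status = 'error' LIMIT 10"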
Notes
- Always use LIMIT for large files to avoid overwhelming output
- Use .schema first to understand column names and types
- For stdin input with non-CSV format: -i json or -i jsonl
- Use --cache for repeated queries on the same file
- Use -Q (quiet) to suppress the progress bar in scripts (see the sketch below)
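Putting the notes together, a script-friendly sketch (assuming a hypothetical users.csv and that -Q can be placed alongside the other run flags):
dataql run -Q -f users.csv --cache -q "SELECT COUNT(*) AS total FROM users"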
Score
Total Score: 75/100 (based on repository quality metrics)
| Criterion | Check | Points |
|---|---|---|
| SKILL.md | ✓ Includes a SKILL.md file | +20 |
| LICENSE | ✓ A license is set | +10 |
| Description | ✓ Has a description of 100+ characters | +10 |
| Popularity | ○ 100+ GitHub stars | 0/15 |
| Recent activity | ✓ Updated within the past month | +10 |
| Forks | ○ Forked 10+ times | 0/5 |
| Issue management | ✓ Fewer than 50 open issues | +5 |
| Language | ✓ A programming language is set | +5 |
| Tags | ✓ At least one tag is set | +5 |