NanoResearch is an AI research engine that actually runs computational experiments—not just generating text outlines or paper drafts. It automates literature search, research idea generation, experimental design, and the writing of executable code. It then submits the code to GPU clusters or runs training locally, collects experimental results, analyzes data, automatically generates paper figures (including architecture diagrams, result comparisons, and ablation study charts), and finally produces a complete LaTeX paper backed by real experimental data. It can even perform automated review and revision.
NanoResearch supports two operational modes: a Python CLI and Claude Code. The Claude Code mode requires no external API key; it uses Claude Code's built‑in WebSearch, Bash execution, and file read/write capabilities to drive the entire research process. It also integrates with Feishu (Lark) bots, so users can start research tasks, check progress, and receive the finished PDF paper directly from a chat interface. The whole process is highly automated: each stage supports resume‑on‑failure, and different models can be configured per stage, keeping the research workflow efficient, reproducible, and traceable.
Conventional AI research tools mostly generate text outlines or paper drafts. What sets NanoResearch apart is its ability to execute a complete computational experiment pipeline:
```
Idea Generation     → Automatically searches literature, identifies research gaps, formulates hypotheses
        ↓
Experiment Planning → Designs detailed experimental plans and technical roadmaps
        ↓
Code Generation     → Produces runnable experimental code (including training scripts, data loading, model definitions)
        ↓
GPU Execution       → Automatically submits jobs to SLURM clusters or runs training on local GPUs
        ↓
Results Analysis    → Parses training logs, extracts metrics, generates structured experimental evidence
        ↓
Figure Generation   → Automatically creates paper figures based on actual experimental data
        ↓
Paper Writing       → Writes a LaTeX paper grounded in real experimental results
        ↓
Review & Revision   → Automatically reviews and revises the paper
```
All data, tables, and figures in the paper come from actual executed experiments.
| Feature | Traditional AI Writing Tools | NanoResearch |
|---|---|---|
| Literature Search | Partial | Automatic retrieval via OpenAlex + Semantic Scholar |
| Experiment Design | Not supported | Automatically generates experimental plans |
| Code Generation | Partial | Fully runnable experimental code |
| Experiment Execution | Not supported | Automatically submits GPU training, supports local and SLURM |
| Results Analysis | Not supported | Parses real training logs and metrics |
| Paper Figures | Not supported | Generates charts based on real data |
| Paper Writing | Outline / Draft | Complete LaTeX paper based on experimental evidence |
| Resume Capability | Not supported | Can resume after failure at any stage |
| Multi‑Model Collaboration | Single model | Different models configurable per stage |
```
Research Topic
      ↓
IDEATION → PLANNING → SETUP → CODING → EXECUTION → ANALYSIS → FIGURE_GEN → WRITING → REVIEW
      ↓
Exported workspace: paper.pdf / paper.tex / references.bib / figures / code / data
```
| Stage | Function | Description |
|---|---|---|
| IDEATION | Literature Search & Idea Generation | Searches academic literature, identifies research gaps, formulates hypotheses, collects essential references |
| PLANNING | Experiment Design | Translates research ideas into detailed experimental designs |
| SETUP | Environment Preparation | Prepares code repositories, dependencies, models, and datasets |
| CODING | Code Generation | Generates complete runnable experimental projects |
| EXECUTION | Experiment Execution | Runs training on local GPUs or SLURM clusters, supports auto‑retry and debugging |
| ANALYSIS | Results Analysis | Parses training logs and metrics, produces structured experimental evidence |
| FIGURE_GEN | Figure Generation | Creates architecture diagrams, result comparison charts, and ablation study figures |
| WRITING | Paper Writing | Writes LaTeX paper based on experimental evidence and citations |
| REVIEW | Review & Revision | Automatically reviews chapters, detects issues, and revises |
The EXECUTION stage includes capabilities such as automatic SLURM job submission, local GPU execution, auto‑debugging and retry, real‑time log monitoring, and mixed execution modes.
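To make the submission-and-retry idea concrete, here is a minimal sketch of what automated SLURM submission involves. This is an illustrative snippet, not NanoResearch's actual implementation; the helper name, script path, and backoff policy are assumptions:

```python
import re
import subprocess
import time

def submit_slurm_job(script_path: str, max_retries: int = 3) -> str:
    """Submit a batch script with sbatch and return the SLURM job id.

    Hypothetical sketch; NanoResearch's real submission logic may differ.
    """
    for attempt in range(1, max_retries + 1):
        result = subprocess.run(
            ["sbatch", script_path], capture_output=True, text=True
        )
        # On success, sbatch prints e.g. "Submitted batch job 123456"
        match = re.search(r"Submitted batch job (\d+)", result.stdout)
        if match:
            return match.group(1)
        time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError(f"sbatch failed after {max_retries} attempts: {result.stderr}")
```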
Paper content is grounded in structured experimental evidence, citations, and experimental artifacts. The system tracks essential references and citation quality, and outputs LaTeX papers with BibTeX support.
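For context, entries in the exported `references.bib` follow standard BibTeX format, for example:

```bibtex
@inproceedings{vaswani2017attention,
  title     = {Attention Is All You Need},
  author    = {Vaswani, Ashish and others},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2017}
}
```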
Writing and figure generation use actual experimental outputs and analysis artifacts. Tables, data statements, and charts are bound to real experimental results, with intermediate JSON artifacts preserved for debugging and auditing.
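As a hypothetical illustration, one of those intermediate JSON artifacts might resemble the snippet below. The schema, field names, and values here are assumptions for readability, not the actual format:

```json
{
  "experiment": "adaptive_sparse_attention_ablation",
  "metrics": {
    "baseline": { "val_loss": 2.41, "tokens_per_sec": 18200 },
    "proposed": { "val_loss": 2.33, "tokens_per_sec": 21500 }
  },
  "source_log": "runs/exp_001/train.log",
  "extracted_at": "2025-01-01T12:00:00Z"
}
```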
Architecture diagrams are generated by image models, while result and ablation charts are generated by code based on real data. Figure prompts, scripts, and outputs are all saved in the workspace.
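A result-comparison script saved to the workspace could look roughly like the following. This is a sketch assuming a metrics JSON like the one above; the file paths and schema are hypothetical:

```python
import json
from pathlib import Path
import matplotlib.pyplot as plt

# Load structured evidence produced by the ANALYSIS stage (hypothetical path/schema)
with open("analysis/evidence.json") as f:
    evidence = json.load(f)

labels = list(evidence["metrics"].keys())
losses = [m["val_loss"] for m in evidence["metrics"].values()]

Path("figures").mkdir(exist_ok=True)
fig, ax = plt.subplots(figsize=(4, 3))
ax.bar(labels, losses)
ax.set_ylabel("Validation loss")
ax.set_title("Baseline vs. proposed")
fig.tight_layout()
fig.savefig("figures/result_comparison.pdf")  # later referenced from the LaTeX source
```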
Artifacts from each stage are written to disk, allowing the process to continue from the last checkpoint after a failure without starting over.
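Conceptually, resume-on-failure can be as simple as skipping any stage whose artifact already exists on disk. Here is a sketch of the idea; the artifact names and stage runner are assumptions:

```python
from pathlib import Path

# Pipeline stages in order (names taken from the stage table above)
STAGES = [
    "ideation", "planning", "setup", "coding", "execution",
    "analysis", "figure_gen", "writing", "review",
]

def run_stage(stage: str, workspace: Path) -> None:
    """Placeholder for the real stage runner; writes the stage's checkpoint."""
    (workspace / f"{stage}.json").write_text("{}")

def resume(workspace: Path) -> None:
    """Skip stages whose checkpoint artifact already exists on disk."""
    for stage in STAGES:
        if (workspace / f"{stage}.json").exists():  # hypothetical artifact name
            print(f"[skip] {stage}: checkpoint found")
            continue
        run_stage(stage, workspace)
```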
```bash
git clone https://github.com/OpenRaiser/NanoResearch.git
cd NanoResearch
pip install -e ".[dev]"
```
Create `~/.nanoresearch/config.json`. Replace `base_url` and `api_key` with your own OpenAI‑compatible API endpoint, and select the models to use for each stage.
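A minimal config might look like the sketch below. `base_url`, `api_key`, and per‑stage model selection come from the description above, but the exact schema (such as the `models` keys and example model names) is an assumption; check the repository's example config for the authoritative format:

```json
{
  "base_url": "https://api.example.com/v1",
  "api_key": "sk-...",
  "models": {
    "ideation": "gpt-4o",
    "coding": "claude-sonnet-4",
    "writing": "gpt-4o"
  }
}
```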
```bash
# Dry run: validate the setup without executing the pipeline
nanoresearch run --topic "Adaptive Sparse Attention Mechanisms" --dry-run

# Full run with NeurIPS paper formatting and verbose logging
nanoresearch run --topic "Adaptive Sparse Attention Mechanisms" --format neurips --verbose

# Resume an interrupted session from its last checkpoint
nanoresearch resume --workspace ~/.nanoresearch/workspace/research/{session_id} --verbose

# Export the finished paper and artifacts
nanoresearch export --workspace ~/.nanoresearch/workspace/research/{session_id} --output ./my_paper
```
NanoResearch can be driven directly through Claude Code without requiring any API key configuration.
```bash
# 1. Clone the repository
git clone https://github.com/OpenRaiser/NanoResearch.git
cd NanoResearch

# 2. Start Claude Code
claude

# 3. Run the full pipeline
/project:research "Your Research Topic Here"
```
| Command | Function |
|---|---|
| `/project:research` | Runs the full 9‑stage pipeline |
| `/project:ideation` | Literature search + hypothesis generation |
| `/project:planning` | Experiment design |
| `/project:experiment` | Environment setup + code generation + experiment execution |
| `/project:analysis` | Results analysis |
| `/project:writing` | Figure generation + paper writing |
| `/project:review` | Multi‑perspective review + revision |
| `/project:status` | View pipeline status |
| `/project:resume` | Resume pipeline from checkpoint |
NanoResearch includes built‑in support for Feishu bots, allowing users to trigger pipelines, check progress, and receive papers directly in Feishu chat.
```bash
pip install lark-oapi
export FEISHU_APP_ID="cli_xxx"
export FEISHU_APP_SECRET="xxx"
nanoresearch feishu
```
| Command | Description |
|---|---|
| `/run` | Start a research pipeline |
| `/status` | View task progress |
| `/list` | List historical research sessions |
| `/stop` | Stop the current pipeline |
| `/export` | Export the most recently completed research |
| `/new` | Reset conversation memory |
| `/help` | Show help information |
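For example, a typical exchange with the bot (assuming `/run` takes the research topic as its argument) might look like:

```text
/run Adaptive Sparse Attention Mechanisms
/status    (shows the current stage and progress)
/export    (returns the finished paper.pdf once the pipeline completes)
```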
**Does the system actually run experiments?** Yes. The system generates runnable code that executes on local GPUs or SLURM clusters. All data in the paper comes from real experiments.

**Does it support resuming after failure?** Yes. The workspace saves checkpoints at each stage. Running the `resume` command continues from where it left off.

**Do I have to configure a different model for each stage?** Not necessarily. The system supports stage‑specific model routing, but you can also use the same model for all stages.

**Can the generated paper be submitted directly?** The output serves as a high‑quality draft. Manual review and revision are recommended before submission.