- Total Tasks
  - Active
  - Completed
  - Skipped
- Total Skills
  - Active
  - Draft
  - Installed
  - Public
- Total Memories
- Writes Today
- Sessions
- Embeddings
📊 Memory Writes per Day
⚡ Tool Response Time (per minute avg)
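A per-minute average like the one in this chart can be computed by bucketing latency samples by minute and averaging each bucket. This is an illustrative sketch, not the plugin's actual implementation; the `(timestamp, latency)` sample format is an assumption.

```python
from collections import defaultdict

def per_minute_avg(samples):
    """Bucket (timestamp_sec, latency_ms) samples by minute, then average each bucket.

    Sketch only: the sample format is assumed, not taken from the plugin.
    """
    buckets = defaultdict(list)
    for ts, latency_ms in samples:
        buckets[int(ts // 60)].append(latency_ms)
    return {minute: sum(v) / len(v) for minute, v in buckets.items()}

# Two samples in minute 0, one in minute 1
samples = [(0, 100), (30, 200), (65, 50)]
print(per_minute_avg(samples))  # {0: 150.0, 1: 50.0}
```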
AI Models
Configure embedding, summarizer and skill evolution models
📡 Embedding Model
Vector embedding model for memory search and retrieval
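Vector-based memory search works by embedding the query and ranking stored memories by similarity. The sketch below shows the general idea with cosine similarity; the memory record shape and field names are hypothetical, not the plugin's schema.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def search(query_vec, memories, top_k=2):
    """Rank stored memories by similarity to the query embedding (sketch only)."""
    ranked = sorted(memories, key=lambda m: cosine(query_vec, m["embedding"]), reverse=True)
    return [m["text"] for m in ranked[:top_k]]

memories = [
    {"text": "deploy notes", "embedding": [1.0, 0.0]},
    {"text": "lunch order", "embedding": [0.0, 1.0]},
    {"text": "deploy checklist", "embedding": [0.9, 0.1]},
]
print(search([1.0, 0.0], memories))  # ['deploy notes', 'deploy checklist']
```

Real deployments would use a high-dimensional embedding model and an index rather than a linear scan, but the ranking principle is the same.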
📝 Summarizer Model
LLM for memory summarization, deduplication and analysis
🔧 Skill Evolution
Auto-extract reusable skills from conversation patterns
Skill Dedicated Model
If not configured, skill generation falls back to the main Summarizer Model above. Configure a dedicated model here for higher-quality skill output.
✓ Saved
Some changes require restarting the OpenClaw gateway to take effect.
Team Sharing
Share memories, tasks and skills with your team
🚀 Get Started with Team Collaboration
MemOS supports team memory sharing. Choose one of the following options to enable collaboration, or continue using local-only mode.
Enable to share memories, tasks and skills with your team. When disabled, all features work normally in local-only mode.
✓ Saved
Some changes require restarting the OpenClaw gateway to take effect.
General
System status, ports and telemetry
📊 Model Health
Requires restart to take effect
Anonymous usage analytics to help improve the plugin. Only sends tool names, latencies, and version info. No memory content, queries, or personal data is ever sent.
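The scope described above (tool names, latencies, version info only) can be pictured as an event like the following. This is a hypothetical payload shape for illustration, not the plugin's actual wire format; the point is what is absent: no memory content, queries, or personal data.

```python
def build_telemetry_event(tool_name, latency_ms, plugin_version):
    """Hypothetical telemetry event: metadata only, never content (sketch)."""
    return {
        "tool": tool_name,          # e.g. a tool name string
        "latency_ms": latency_ms,   # timing measurement only
        "version": plugin_version,  # plugin version string
    }

event = build_telemetry_event("memory_search", 42, "1.2.0")
print(sorted(event))  # ['latency_ms', 'tool', 'version']
```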
✓ Saved
Team Admin Panel
Manage team members and shared resources
📥 Import OpenClaw Memory
Migrate your existing OpenClaw built-in memories and conversation history into this plugin. The import applies smart deduplication, so re-imported content does not create duplicate entries.
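One common way to deduplicate on import is to hash each chunk's normalized content and skip anything already seen. A minimal sketch of that approach, assuming plain-string chunks (the plugin's actual dedup logic may differ):

```python
import hashlib

def import_memories(chunks, existing_hashes):
    """Skip chunks whose normalized content hash is already stored (sketch).

    `existing_hashes` is a set of hex digests from prior imports; it is
    mutated so a later run sees earlier imports.
    """
    imported = []
    for chunk in chunks:
        digest = hashlib.sha256(chunk.strip().lower().encode()).hexdigest()
        if digest in existing_hashes:
            continue  # duplicate of an already-imported chunk
        existing_hashes.add(digest)
        imported.append(chunk)
    return imported

seen = set()
print(import_memories(["Hello", "hello ", "world"], seen))  # ['Hello', 'world']
```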
Three ways to use:
① Import memories only (fast) — Click "Start Import" to quickly migrate all memory chunks and conversations. No task/skill generation. Suitable when you just need the raw data.
② Import + generate tasks & skills (slow, serial) — After importing memories, enable "Generate Tasks" and/or "Trigger Skill Evolution" below to analyze conversations one by one. This takes longer because each session is processed by the LLM sequentially.
③ Import first, generate later (flexible) — Import memories now, then come back anytime to start task/skill generation. You can pause the generation at any point and resume later — it will pick up where you left off, only processing sessions that haven't been handled yet.
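The pause-and-resume behavior in option ③ can be sketched as a loop that checkpoints after every session and skips sessions already handled. The function and field names here are illustrative assumptions, not the plugin's API:

```python
def run_generation(sessions, processed_ids, generate, should_stop=lambda: False):
    """Process sessions serially; skip already-handled ones so a rerun resumes.

    Sketch only: `processed_ids` is a persisted set of finished session ids,
    `generate` is the per-session task/skill generation step, and
    `should_stop` models the user pausing mid-run.
    """
    for session in sessions:
        if session["id"] in processed_ids:
            continue  # handled in a previous run; resume past it
        if should_stop():
            break  # paused: remaining sessions stay pending for next run
        generate(session)
        processed_ids.add(session["id"])  # checkpoint after each session
    return processed_ids

sessions = [{"id": 1}, {"id": 2}, {"id": 3}]
done = run_generation(sessions, {1}, lambda s: None)  # session 1 done earlier
print(sorted(done))  # [1, 2, 3]
```

Because progress is recorded per session, interrupting the run loses at most the session currently in flight.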