Ways to use VoLCA
VoLCA is one engine with several entry points. Start from the surface that matches how you want to work; they all rely on the same loaded databases, methods, and calculation engine. For the shared data model behind those entry points, read Core concepts.
At a glance
- Hosted product — fastest path when you want to try VoLCA without installing the engine or managing infrastructure.
- Desktop app — local graphical workflow for analysts who want to keep data and computation on their machine.
- Self-hosted binary — standalone volca executable for teams who want full control over where the engine runs.
- CLI — scripted access to searches, inventories, impacts, database loading, methods, and exports.
- REPL — interactive terminal exploration with session state and automatic server management.
- HTTP API — integration surface for applications, pipelines, notebooks, dashboards, and internal tools.
- Python client (pyvolca) — Pythonic wrapper around an existing VoLCA server, or a helper to launch a local engine from Python.
- MCP — connect Claude, Cursor, ChatGPT, or another AI agent to a VoLCA instance.
- Shared concepts — activities, direct exchanges, supply-chain entries, edges, inventories, impacts, and substitutions mean the same thing across all entry points.
Hosted product
Use the hosted product if you want the quickest first contact: create an account, start a managed VoLCA instance, and use the browser UI without setting up your own server.
Best for:
- first evaluation and demos;
- teams that want a shared environment;
- users who do not want to install or maintain the engine themselves.
Start here: create an account or read the desktop and hosted path.
Desktop app
Use the desktop app if you want a local graphical workflow. It runs on your machine and is the most approachable path for analysts who prefer not to start with command-line tooling.
Best for:
- local exploration of LCA databases;
- analysts who want a UI before integrating VoLCA elsewhere;
- situations where data should stay on the user’s computer.
Start here: Desktop.
Self-hosted engine binary
Use the standalone volca binary when you want to run the engine yourself on Linux, macOS, or Windows. The binary can load databases directly, start the HTTP server, expose MCP, and serve the same core features used by the other entry points.
Best for:
- local technical evaluation;
- self-hosted deployments;
- CI jobs, batch scripts, and controlled environments;
- teams that need to choose their own infrastructure.
Start here: Get started or Quick Start.
CLI
Use the CLI when you want direct terminal commands for search, inventory, impacts, methods, flow mapping, database management, and exports.
Two modes matter:
- Local mode: volca --config volca.toml ... loads and queries data directly.
- Client mode: add --url or VOLCA_URL to call an already-running server.
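To make the two modes concrete, here is a hedged sketch. The subcommand placeholder and the server address are illustrative; only the --config flag, the --url flag, and the VOLCA_URL variable come from the description above.

```
# Local mode: the binary loads and queries data itself.
volca --config volca.toml <subcommand>

# Client mode: the same command runs against an already-running server.
volca --url http://localhost:8080 <subcommand>

# Equivalent client mode via the environment:
export VOLCA_URL=http://localhost:8080
volca <subcommand>
```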
Start here: CLI overview.
REPL
Use the REPL when you are exploring and do not yet know the exact commands you want to script. It gives you an interactive prompt, keeps session state, and can start/stop the local server for you.
Best for:
- exploratory database browsing;
- trying several searches and impact calculations quickly;
- debugging queries before automating them.
Start here: REPL reference.
HTTP API
Use the HTTP API when another application needs to call VoLCA. The server exposes JSON endpoints under /api/v1/ for database search, activity details, inventories, impacts, method management, reference data, and server status.
Best for:
- product integrations;
- internal tools and dashboards;
- notebooks and pipelines;
- repeatable workflows that should not depend on terminal parsing.
Start here: API reference. When a server is running, the live Swagger UI is available at /api/v1/docs and the OpenAPI spec at /api/v1/openapi.json.
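As a sketch of what calling those endpoints can look like, the snippet below builds request URLs under /api/v1/ with the Python standard library. The base address, the status and search endpoint names, and the q parameter are assumptions; check the live OpenAPI spec at /api/v1/openapi.json for the real routes.

```python
# Minimal sketch of calling a VoLCA server's JSON API.
# Endpoint names and the base address are assumptions; see /api/v1/openapi.json.
import json
from urllib.parse import urlencode, urljoin
from urllib.request import urlopen

BASE = "http://localhost:8080"  # assumed address; use your server's


def api_url(path: str, **params: str) -> str:
    """Build a URL under the /api/v1/ prefix, with optional query parameters."""
    url = urljoin(BASE, "/api/v1/" + path.lstrip("/"))
    return url + ("?" + urlencode(params) if params else "")


def get_json(path: str, **params: str) -> dict:
    """GET a JSON endpoint and decode the response body."""
    with urlopen(api_url(path, **params)) as resp:
        return json.load(resp)


# Example calls (require a running server; endpoint names are hypothetical):
# status = get_json("status")
# hits = get_json("search", q="steel production")
```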
Python client: pyvolca
Use pyvolca when your workflow is already in Python. The normal path is to connect a Client to a hosted or self-hosted VoLCA server. Use download() and Server only when you deliberately want Python to download and launch a local engine process.
Best for:
- notebooks and data-science workflows;
- Python automation around searches, inventories, and impacts;
- integration with pandas, reporting, or existing analysis scripts.
Start here: pyvolca guide.
MCP for AI agents
Use MCP when you want an AI assistant to inspect LCA data through controlled tools instead of pasting raw exports into a chat. VoLCA exposes MCP over HTTP at /mcp on the running server.
Best for:
- asking natural-language questions over a database;
- hotspot exploration with an assistant;
- connecting Claude, Cursor, ChatGPT, or another MCP-capable client.
Start here: MCP overview and MCP tools.
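For MCP clients that accept a remote server by URL, the connection entry is typically a small JSON fragment like the sketch below. The exact schema and file location vary by client, and the host and port are placeholders; only the /mcp path comes from VoLCA itself.

```json
{
  "mcpServers": {
    "volca": {
      "url": "http://localhost:8080/mcp"
    }
  }
}
```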
Which path should I choose first?
- I just want to see VoLCA quickly → hosted product or desktop app.
- I am an analyst and prefer a UI → desktop app.
- I am technical and want local control → self-hosted binary + quick start.
- I want to automate from shell scripts → CLI.
- I want to explore interactively → REPL.
- I want to integrate VoLCA into software → HTTP API.
- I work in Python → pyvolca.
- I want an AI agent to use VoLCA → MCP.
Mental model
A typical technical setup looks like this:

LCA databases + methods
    ↓
VoLCA engine
    ↓
Browser UI / Desktop / CLI / REPL / HTTP API / pyvolca / MCP

You do not need to learn every entry point at once. Pick one surface, confirm that your database and method load correctly, then switch to the entry point that fits your workflow.
If your goal is to build an extraction, decomposition, contribution, or substitution workflow, start with Core concepts, then continue with Supply chain and inventory concepts before choosing API, Python, CLI, or MCP calls.