Why the portal, API, and SDK share the same engine

Offering more ways in is not enough: what matters is one execution logic behind the portal, the API, and the SDKs.

When a product offers several ways to use it, it is easy to assume those are simply separate interfaces side by side.

A portal for business users.
An API for developers.
An SDK for integrations.

From the outside, that can look like three different doors to three different experiences.

But in many cases, the real trouble starts right there.

Because when each interface sits on different logic, costs come back quickly:

  • duplication
  • divergence
  • heavier maintenance
  • less consistent results
  • more fragile integrations
  • harder evolution

In other words, more entry points does not automatically create coherence.

It can multiply fragmentation instead.

The real goal is not “many interfaces”

The real goal is many ways to reach the same logic, without reimplementing it for every context.

In many environments, the same capability ends up in several shapes:

  • one version visible in the UI
  • one version exposed by an API
  • one version tailored to a script
  • sometimes another rebuilt in a neighboring system

Once those variants multiply, cost rises.

You must maintain each variant.
You must verify each one.
You must test several paths.
You must reconcile results.
You must keep everything moving together.
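The drift those variants create is easy to reproduce. Here is a hypothetical sketch (not Subspace code; the function names are invented for illustration): two teams re-implement the same compounding formula, one applying the rate at every step and one applying it once over the whole horizon, and the results silently diverge.

```python
# Hypothetical sketch of divergence: the "same" capital formula,
# re-implemented twice with a subtle difference.

def capital_ui(init: float, rate: float, steps: int) -> float:
    # UI team: compounds the rate at every step.
    value = init
    for _ in range(steps):
        value *= 1 + rate
    return value

def capital_script(init: float, rate: float, steps: int) -> float:
    # Script team: applies the rate once, scaled by the horizon.
    return init * (1 + rate * steps)

a = capital_ui(1000.0, 0.05, 12)
b = capital_script(1000.0, 0.05, 12)
# The two "identical" capabilities no longer agree;
# reconciling them is now someone's job.
assert abs(a - b) > 1
```

Neither implementation is wrong on its own terms; the problem is that nothing forces them to stay the same.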

So the problem is not only opening more access.

The problem is keeping a single execution truth behind that access.

A different approach

A stronger architecture flips the mental model.

Instead of thinking:

  • a portal with its own logic
  • an API with its own logic
  • an SDK with its own logic

you think:

  • one shared engine
  • several ways to reach the same engine

That sounds small.

Yet it deeply changes how the system can evolve.

When the engine is shared:

  • logic is less scattered
  • behavior stays more consistent
  • changes propagate more easily
  • integrations get cleaner
  • maintenance drops

What that means for Subspace

In Subspace, what matters is not only that there are:

  • a portal
  • an API
  • SDKs

What matters is that they sit on the same execution logic.

So a model does not need to be redefined differently depending on whether it is used:

  • from a UI
  • from a backend
  • from a Python script
  • from a TypeScript application
  • inside a broader workflow

The access path changes. The engine stays the same.

That structure is what makes Subspace more than a loose bundle of separate tools.

Diagram: one engine, several doors

The portal, REST API, Python SDK, and TypeScript SDK converge on the same SP Model (JSON); the Subspace engine then produces results.

These are not three separate products: they are one engine, surfaced for each context. The hero screenshot (the portal's spreadsheet view) illustrates the same idea: you define the model and run the calculation against the same spec you would send through the API or SDK below.
Minimal example: HTTP API

Same idea as in the portal: an SP Model (scenarios, steps, variables) handed to the engine via POST /simulate (see the developer documentation for full detail).

curl -sS -X POST "https://api.subspacecomputing.com/simulate" \
  -H "Content-Type: application/json" \
  -H "X-API-Key: be_live_..." \
  -d '{"scenarios":1000,"steps":12,"variables":[{"name":"taux","dist":"uniform","params":{"min":0.03,"max":0.07},"per":"scenario"},{"name":"capital","init":1000,"formula":"capital[t-1] * (1 + taux)"}]}'

Minimal example: Python SDK

from subspacecomputing import BSCE
 
client = BSCE(api_key="be_live_...")
result = client.simulate({
    "scenarios": 1000,
    "steps": 12,
    "variables": [
        {"name": "taux", "dist": "uniform", "params": {"min": 0.03, "max": 0.07}, "per": "scenario"},
        {"name": "capital", "init": 1000.0, "formula": "capital[t-1] * (1 + taux)"},
    ],
})

The spec is the same logical object you can run from the portal or from curl.
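To make "same logical object" concrete, here is a small standard-library sketch (the field values are the ones from the examples above): the spec is defined once as a plain dict, and serializing it produces the payload a raw HTTP call would carry.

```python
import json

# One spec, defined once; these are the same fields as the curl
# and SDK examples above.
spec = {
    "scenarios": 1000,
    "steps": 12,
    "variables": [
        {"name": "taux", "dist": "uniform",
         "params": {"min": 0.03, "max": 0.07}, "per": "scenario"},
        {"name": "capital", "init": 1000,
         "formula": "capital[t-1] * (1 + taux)"},
    ],
}

# Serialized, this is the JSON body the curl example sends with -d;
# the Python SDK passes the same dict directly to client.simulate().
payload = json.dumps(spec, separators=(",", ":"))
```

Whether the dict goes through the SDK, a raw HTTP client, or is typed into the portal, it is one object reaching one engine.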

Why this matters

When interfaces truly share one engine, several benefits show up.

1. More consistency

Model behavior stays aligned no matter how you call it.

2. Less duplication

You do not rebuild the logic for every context.

3. Less maintenance

When logic changes, you do not patch parallel implementations.

4. Better integration

You can connect Subspace to a product, internal portal, backend, or script without recreating an execution foundation.

5. More flexibility

A team can start one way and expand, for example:

  • begin in the portal
  • then automate via the API
  • then embed via an SDK in a product or workflow

Without swapping engines.

What this changes for an organization

This avoids a common pattern: a system that looks simple at first, then fragments as use cases grow.

One need appears. Then another. Then an integration. Then another UI. Then automation.

If every context needs its own logic, the structure quickly costs more than it seems.

If use cases share one base, growth stays much healthier.

The system becomes:

  • more stable
  • more reusable
  • more evolvable
  • easier to sustain over time

The economic angle

This is not only an architecture topic.

It is an economic one.

When many use cases share one logic:

  • re-development goes down
  • maintenance goes down
  • drift between implementations goes down
  • integration becomes more cost-effective
  • evolving the system costs less

Sharing one engine is not only elegant engineering.

It is also a smarter way to keep analytical models alive.

Conclusion

The value of having a portal, an API, and SDKs is not only the number of options.

It is that those options rest on the same execution engine.

That shared base enables:

  • more coherence
  • more reuse
  • less duplication
  • less maintenance
  • better integration across contexts

So the key is not only access.

The key is the unity of the logic behind that access.

That is exactly what gives a platform like Subspace its strength.