In Managing Side Effects, we built a tiny JavaScript Effect System that treats side effects like database calls as data descriptions rather than executable promises.

In a typical imperative application, business logic and side effects are inextricably linked. For example, when you write await db.checkInventory(...), the runtime immediately reaches out to the database.

Our Effect System works differently. Instead of performing the action, our functions return a description of the action. When our code needs to check inventory, it doesn’t call the database; it returns a plain object instead, which will be executed later by an interpreter. For example:

function checkInventory(order) {
    // Define the side effect, but don't run it yet
    const cmdCheckInventory = () => db.checkInventory(order);
    
    // Define what happens with the result
    const next = (exists) => exists ? Success() : Failure('Out of stock');
    
    return Command(cmdCheckInventory, next);
}

Calling this function outputs:

{
    type: 'Command',
    cmd: [Function: cmdCheckInventory], // The command waiting to be executed
    next: [Function: next]              // What to do after the command finishes
}
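For reference, the three constructors can be as small as plain object factories. This is a minimal sketch based on the shapes shown above; the actual implementation from the earlier article may differ:

```javascript
// Plain object factories -- no classes, no hidden state.
// (Sketch: field names 'value' and 'error' are assumptions.)
const Command = (cmd, next) => ({ type: 'Command', cmd, next });
const Success = (value) => ({ type: 'Success', value });
const Failure = (error) => ({ type: 'Failure', error });
```

Because these are inert data, two effect trees can be compared, logged, or serialized without anything actually running.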

This is all well and good, but the pattern really shines when we compose multiple pure functions together. By using an effect pipeline to pass the output of one function to the next, we generate a syntax tree: a series of Command, Success, or Failure objects.

const processOrderFlow = (order) =>
    effectPipe(
        validateOrder,
        checkInventory,
        () => chargeCreditCard(order),
        (paymentId) => completeOrder(order, paymentId)
    )(order);

Notice that there’s no logic in the above declarative code. The logic is hidden inside the functions, where it belongs, and we can immediately see the steps required to place an order.

Our effect pipeline handles the Success and Failure cases automatically. If a function returns Success, the next function in the pipeline is called with its value; if a function returns Failure, the pipeline short-circuits and no further steps run.
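Here is one way such a pipeline could work. This is a sketch, not the actual implementation from the earlier article: the chain helper and the exact constructor shapes are assumptions based on the behavior described above.

```javascript
// Sketch of an effect pipeline (names and shapes are assumptions).
const Command = (cmd, next) => ({ type: 'Command', cmd, next });
const Success = (value) => ({ type: 'Success', value });
const Failure = (error) => ({ type: 'Failure', error });

// chain() rewires an effect tree so every Success feeds the next step
// and every Failure short-circuits the rest of the pipeline.
const chain = (effect, step) => {
    switch (effect.type) {
        case 'Success': return step(effect.value);         // pass result onward
        case 'Failure': return effect;                     // short-circuit
        case 'Command': return Command(
            effect.cmd,
            (result) => chain(effect.next(result), step)   // defer until run
        );
    }
};

const effectPipe = (...steps) => (input) =>
    steps.reduce((effect, step) => chain(effect, step), Success(input));
```

Note that chain never calls effect.cmd; it only wraps the next continuation, so building the pipeline stays completely pure.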

Look Ma, No Mocks!

In a standard imperative codebase, testing the order flow would require mocking the inventory database and sandboxing the payment gateway API to avoid real charges.

With our Effect System, processOrderFlow is a pure function. It returns a syntax tree that represents the order processing, without actually doing it. We can write a unit test that walks through the logic step by step:

const order = [{ itemId: 'sku-123', quantity: 1, amount: '99.00' }];

const step1 = processOrderFlow(order);

// Verify the intent to check inventory
assert.equal(step1.type, 'Command');
assert.equal(step1.cmd.name, 'cmdCheckInventory');

// Simulate a successful inventory check by calling next with true
const step2 = step1.next(true);

// Verify the intent to charge the card
assert.equal(step2.type, 'Command');
assert.equal(step2.cmd.name, 'cmdChargeCreditCard');

// Simulate a successful payment
const step3 = step2.next('payment-id-123'); 

// Verify the intent to complete the order
assert.equal(step3.type, 'Command');
assert.equal(step3.cmd.name, 'cmdCompleteOrder');

There are three major benefits to this approach:

  • Speed: Tests run almost instantly since there are no network requests or database connections.
  • Determinism: The tests don’t depend on network or database state; we are testing the business logic in isolation. The database operations themselves can be tested separately by calling await runEffect(processOrderFlow(order)) against a test database.
  • Safety: There is zero risk of accidentally charging a real credit card during a test run.
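To complete the picture, here is one way the runEffect interpreter mentioned above could look. This is a sketch under assumptions; the real interpreter may handle errors, retries, or cancellation differently.

```javascript
// Sketch of a runEffect interpreter (shapes are assumptions).
const Command = (cmd, next) => ({ type: 'Command', cmd, next });
const Success = (value) => ({ type: 'Success', value });
const Failure = (error) => ({ type: 'Failure', error });

// Walk the tree: await each Command's thunk, feed the result to next(),
// and stop at the first Success or Failure leaf.
async function runEffect(effect) {
    while (effect.type === 'Command') {
        const result = await effect.cmd(); // the side effect runs HERE
        effect = effect.next(result);
    }
    return effect; // final Success or Failure
}
```

All of the impurity lives in this one loop; everything upstream of it stays pure and synchronously testable.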

By separating the description of work from the execution of work, we’ve turned complex, async integration tests into simple, synchronous unit tests, and we’ve effectively tested side effects without invoking any.

An AI-Native Architecture

One of the arguments against functional patterns like our Effect System is the overhead of writing command objects and wrapper functions. I’d counter that this overhead buys us something valuable: the architecture is effectively AI-native.

Because the business logic is expressed as data rather than imperative code with hidden state, LLMs can generate, refactor, and test these pipelines with great accuracy. An AI coding assistant doesn’t need to understand a complex database mocking setup to write a test; it just needs to predict the next object in the sequence. I’d argue that choosing an architecture that is easy for both humans and machines to reason about is a competitive advantage.


GitHub Repository: pure-effect

