Advanced NEAR Oracle Nested Inference Tutorial & Agents

This tutorial covers advanced techniques for interacting with the Onchain AI Oracle on the NEAR protocol. Specifically, it demonstrates implementing Nested Inference by executing multiple inference requests within a single transaction.

Table of Contents

  • Introduction

  • Nested Inference Use Cases

  • Implementation Overview

  • Implementation Steps

    • 1. Modifying onOracleOpenAiLlmResponse

    • 2. Modifying onOracleFunctionResponse

  • Considerations

  • Conclusion

Introduction

In this tutorial, we’ll explore the Nested Inference capabilities of the Onchain AI Oracle. Nested inference enables a smart contract to initiate a second inference request based on the result of a previous inference. This action is atomic, meaning it executes in a single transaction and can extend to more than two requests, creating versatile, chained computations.

Nested Inference Use Cases

Nested inference allows for flexible and multi-step operations within NEAR contracts. Some example applications include:

  • Generating a prompt with an LLM to create AI-Generated Content (AIGC) for NFTs

  • Extracting structured data from a dataset and using it as input to generate visual data

  • Adding transcripts to videos and translating them into multiple languages

Implementation Overview

The goal of this tutorial is to adapt the Prompt contract to support nested inference requests. For this example, we will use AI to generate a Python code snippet on a trending programming topic, using on-chain AI computation on NEAR. The main steps include:

  1. Modifying the onOracleOpenAiLlmResponse method to support sequential inference requests.

  2. Modifying the onOracleFunctionResponse method to manage response handling, execute additional requests, or complete the process based on set conditions.
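Both callbacks read and write a per-run record stored in `agentRuns`. Based on the fields referenced in the snippets below, its shape can be sketched roughly as follows (field names are taken from the tutorial's code; the `Message` shape and the `newRun` helper are assumptions for illustration):

```typescript
// Sketch of the per-run state shared by both callbacks.
// Field names come from the tutorial's snippets; Message is assumed.
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}

interface AgentRun {
  messages: Message[];    // conversation history passed to the LLM
  responseCount: number;  // how many responses have been processed so far
  maxIterations: number;  // stop condition for the nested loop
  isFinished: boolean;    // set once the run terminates
}

// A new run starts with an empty history and zero responses.
function newRun(maxIterations: number): AgentRun {
  return { messages: [], responseCount: 0, maxIterations, isFinished: false };
}
```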

Implementation Steps

1. Modifying onOracleOpenAiLlmResponse

The onOracleOpenAiLlmResponse method initiates a cross-contract call to the Oracle, using the response from a previous call to perform further inference. Here’s the implementation:

onOracleOpenAiLlmResponse({
    runId,
    response,
    errorMessage,
}: {
    runId: number;
    response: openAIResponse;
    errorMessage: string;
}): any {
    this.onlyOracle();
    const run = this.agentRuns.get(runId.toString());
    assert(run !== null, "Run not found");

    // Terminate the run if the Oracle reported an error.
    if (errorMessage != "") {
        run.isFinished = true;
        this.agentRuns.set(runId.toString(), run);
        return;
    }

    // Stop once the iteration budget is spent.
    if (run.responseCount >= run.maxIterations) {
        run.isFinished = true;
        this.agentRuns.set(runId.toString(), run);
        return;
    }

    // Record the assistant's reply in the run history.
    if (response.content != null) {
        const newMessage = this.createTextMessage("assistant", response.content);
        run.messages.push(newMessage);
        run.responseCount++;
        this.agentRuns.set(runId.toString(), run);
    }

    // The model requested a tool: ask the Oracle to execute it.
    // onOracleFunctionResponse will receive the result and continue the loop.
    if (response.functionName != "") {
        const promise = NearPromise.new(this.oracleAddress)
            .functionCall(
                "createFunctionCall",
                JSON.stringify({
                    functionCallbackId: runId,
                    functionType: response.functionName,
                    functionInput: response.functionArguments,
                }),
                BigInt(0),
                THIRTY_TGAS
            );

        return promise.asReturn();
    }

    // No tool requested: the run is complete.
    run.isFinished = true;
    this.agentRuns.set(runId.toString(), run);
}

2. Modifying onOracleFunctionResponse

The onOracleFunctionResponse function handles tool-call results and initiates additional inference requests as needed. It appends the result to the run history and issues a follow-up createOpenAiLlmCall, continuing the loop until a stop condition, such as reaching maxIterations, is hit in onOracleOpenAiLlmResponse. This is what makes the nested inference flexible: each response can trigger another round of computation.

onOracleFunctionResponse({
    runId,
    response,
    errorMessage,
}: {
    runId: number;
    response: string;
    errorMessage: string;
}): NearPromise {
    this.onlyOracle();
    const run = this.agentRuns.get(runId.toString());
    assert(run !== null, "Run not found");

    // Feed the error back to the model instead of the result, if one occurred.
    let result = response;
    if (errorMessage != "") {
        result = errorMessage;
    }

    // Append the tool result to the history and request the next inference.
    const newMessage = this.createTextMessage("user", result);
    run.messages.push(newMessage);
    run.responseCount++;
    this.agentRuns.set(runId.toString(), run);
    const promise = NearPromise.new(this.oracleAddress)
        .functionCall(
            "createOpenAiLlmCall",
            JSON.stringify({
                promptCallbackID: runId,
                config: this.config,
            }),
            BigInt(0),
            THIRTY_TGAS
        );
    return promise.asReturn();
}

Considerations

  • Gas Fees: Each nested inference call requires enough gas to complete. To avoid interruptions, estimate gas usage across all intended inference steps before attaching gas to the initial call.

  • Callback Handling: Register a callback for every cross-contract call so that responses are processed and follow-up inferences are triggered; an unhandled promise leaves the run stuck in an unfinished state.

  • Error Management: Implement error handling across callbacks to manage response failures gracefully, for example by marking the run as finished or feeding the error message back to the model.
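As a rough illustration of the gas point, the budget attached to the initial transaction has to cover every hop in the chain. The 30 Tgas figure below matches the THIRTY_TGAS constant used in the snippets above; the per-callback overhead is an illustrative assumption, not a measured value:

```typescript
// Rough gas budgeting sketch for a chain of nested calls.
// THIRTY_TGAS matches the constant in the snippets; the 10 Tgas
// per-hop overhead is an assumed figure for illustration only.
const TGAS = 1_000_000_000_000n;
const THIRTY_TGAS = 30n * TGAS;
const CALLBACK_OVERHEAD = 10n * TGAS; // assumed bookkeeping cost per hop

function estimateBudget(nestedCalls: number): bigint {
  // Each inference step needs one cross-contract call plus its callback.
  return BigInt(nestedCalls) * (THIRTY_TGAS + CALLBACK_OVERHEAD);
}
```

Note that NEAR caps the gas attachable to a single transaction (300 Tgas at the time of writing), which bounds how deep a nested inference chain can go in one atomic call.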
