AI as a Universal Information Interface
Posted: 6 May 2025.
Last modified: 10 June 2025.
This article will take about 7 minutes to read.
The advent of LLMs gives us access to vast quantities of information that may or may not be correct, and it is up to
us to sort through that information and verify it for accuracy. In practice, most of what an LLM returns is correct,
and the odds improve when the generated text requires little context or reasoning.
One use case for AI that requires very little context is returning structured data, as if the LLM were an API.
As long as a human can review the output for accuracy, a completion API can stand in for just about any read-only
API out there.
Unlike specialized APIs that each perform one function, a completion API can act as a universal interface to a wide
range of tasks and services, effectively serving as a stand-in for many specialized APIs.
- LLMs have already absorbed lots of unstructured web data: these models have been trained on massive datasets
scraped from the internet, including technical documentation, web apps, APIs, and natural-language descriptions. This
gives them latent knowledge of how thousands of APIs and domains work, even without direct integration.
- LLMs can dynamically create structured output: unlike fixed-function APIs, a completion API can generate output
in a variety of structured formats (JSON, XML, Markdown, SQL, HTML, even proprietary data schemas) based solely on
the prompt.
- A single API integration is enough to create a POC: you don't need to implement a different client, token, or SDK
for each new API. Instead, you pass natural-language or programmatic prompts to a single endpoint and get
intelligent responses across domains. Here the sum is greater than its parts, because new, interdisciplinary
information can be returned.
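As a sketch of what that single-endpoint integration can look like, the snippet below builds a "use this structure" prompt and validates whatever comes back before trusting it. The helper names and the sample reply are illustrative, not from the article; a real reply would come from whatever completion client you use.

```python
import json

def build_structured_prompt(question: str, schema_example: dict) -> str:
    """Ask the model to answer using only a caller-supplied JSON structure."""
    return (
        "Using this JSON structure, respond to the request below.\n"
        f"Structure: {json.dumps(schema_example)}\n"
        f"Request: {question}\n"
        "Reply with JSON only, no prose."
    )

def parse_structured_reply(raw: str, required_keys: set) -> dict:
    """Validate the model's reply before treating it as API output."""
    data = json.loads(raw)  # raises ValueError if the model returned prose
    missing = required_keys - data.keys()
    if missing:
        raise ValueError(f"model omitted required keys: {missing}")
    return data

# One endpoint, many "APIs": the structure in the prompt decides the shape.
prompt = build_structured_prompt(
    "give me information about squirtle",
    {"name": "", "types": [], "pokedex_number": 0},
)
# A hypothetical model reply, stubbed here so the sketch is self-contained.
reply = '{"name": "squirtle", "types": ["water"], "pokedex_number": 7}'
result = parse_structured_reply(reply, {"name", "types", "pokedex_number"})
print(result["name"])
```

The validation step matters because, as noted below, nothing guarantees the model's output matches the requested structure.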
By way of example:

| Project context | Traditional API call | Completions API prompt |
| --- | --- | --- |
| Pokedex (offline JSON) | GET from pokeapi | "Using this JSON structure, give me information about squirtle" |
| Pokedex (offline XML) | GET from pokeapi, convert to XML on the client (no XML support) | "Using this XML structure, give me information about squirtle" |
| Pokedex (content needed in French) | With the app set to English, GET from pokeapi; with the app set to French, GET from Tyradex; use domain models to map both APIs to a single type | "Using this JSON structure, give me information about squirtle in French" |
Limitations to Keep in Mind
- Lack of live data: Without tools or plugins, the model can’t fetch real-time information. It only simulates
knowledge based on training data and inference.
- No persistent state: It doesn’t “remember” past interactions unless context is preserved by the user or
application.
- No guaranteed accuracy: It can hallucinate or make up answers if unsure, especially when simulating APIs or
databases.
Overcoming Those Limitations: AI as the Universal Adapter
A completion API can function as a wrapper or front end to other bespoke APIs.
⚠
Note
This pattern is now called a tool, such as the tools defined by the Model Context Protocol (MCP).
Tools given to an LLM can be called by the model, as long as it formats the call correctly. The rest of this article
describes tools that make API calls: the LLM converts a natural-language query into a REST call, executes it, then
formats the response back into natural language.
```mermaid
sequenceDiagram
    actor User
    actor LLM
    box transparent API Tool 1
        participant LLMTool1 as LLMTool
        participant API1 as API
    end
    box transparent API Tool 2
        participant LLMTool2 as LLMTool
        participant API2 as API
    end
    User->>LLM: Natural Language Query
    LLM->>LLMTool1: LLM Tool Request (with data from natural language query)
    LLMTool1->>API1: REST Query
    API1->>LLMTool1: REST Response
    LLMTool1->>LLM: LLM Tool Response (from API)
    LLM->>LLM: Processing
    LLM->>LLMTool2: LLM Tool Request (with data from natural language query)
    LLMTool2->>API2: REST Query
    API2->>LLMTool2: REST Response
    LLMTool2->>LLM: LLM Tool Response (from API2)
    LLM->>User: Natural Language Response
```
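One way to wire up a tool like the ones in the diagram is sketched below, assuming an OpenAI-style function-calling schema. The tool name, the `dispatch_tool_call` helper, and the dispatcher logic are illustrative; only the pokeapi URL shape is real. The tool description tells the model what it can call, and the dispatcher turns the model's emitted call into a REST request.

```python
import json
from urllib.parse import quote

# A tool description in the JSON-schema style used by several chat APIs.
# The model sees this and emits a matching call when a query needs it.
POKEMON_TOOL = {
    "type": "function",
    "function": {
        "name": "get_pokemon",
        "description": "Look up a Pokemon by name via a REST API.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}

def dispatch_tool_call(tool_name: str, arguments_json: str) -> str:
    """Turn a model-emitted tool call into the REST URL we would query."""
    args = json.loads(arguments_json)
    if tool_name == "get_pokemon":
        return f"https://pokeapi.co/api/v2/pokemon/{quote(args['name'])}"
    raise ValueError(f"unknown tool: {tool_name}")

# The model might respond with: tool=get_pokemon, arguments='{"name": "squirtle"}'
url = dispatch_tool_call("get_pokemon", '{"name": "squirtle"}')
print(url)
```

The explicit allow-list of tool names in the dispatcher is deliberate: the model only ever produces text, so the application decides what that text is allowed to trigger.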
Example
A simple natural language query like “How many orders did we ship to Germany last week?” could be
transformed into a structured API call like this:
```mermaid
sequenceDiagram
    actor User as User
    actor NLP as Completions API (NLP Translation Layer)
    participant Tool as Tool Layer
    participant Backend as Backend API (Your Custom Logic)
    User->>NLP: "How many orders did we ship to Germany last week?"
    Note right of NLP: Parses intent, entities,<br/>and timeframes
    NLP->>Tool: Structured API call (sample JSON)
    Note right of Tool: {"endpoint": "/orders",<br/>"method": "GET",<br/>"params": {"country": "Germany",<br/>"date_range": "2025-04-28 to 2025-05-04"}}
    Tool->>Backend: GET /orders?country=Germany&date_range=2025-04-28 to 2025-05-04
    Backend-->>Tool: Raw order data
    Tool-->>NLP: Raw order data
    Note right of NLP: Converts response<br/>to natural language
    NLP->>User: "We shipped 153 orders to Germany between April 28 and May 4."
```
User Input (Natural Language)
→ "How many orders did we ship to Germany last week?"

Completions API (NLP Translation Layer)
→ Parses and interprets the intent, entities, and timeframes.
→ Outputs a structured API call:

```json
{
  "endpoint": "/orders",
  "method": "GET",
  "params": {
    "country": "Germany",
    "date_range": "2025-04-28 to 2025-05-04"
  }
}
```

Backend API (Your Custom Logic)
→ Executes the call and returns raw data.

Completions API (Response Interpretation)
→ Converts the structured API response into natural language:
→ "We shipped 153 orders to Germany between April 28 and May 4."
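The translation from the model's structured call to a concrete HTTP request can be sketched as a small function. The parameter names mirror the sample JSON above; the base URL and encoding choices are assumptions for illustration.

```python
from urllib.parse import urlencode

def to_request(call: dict, base_url: str = "https://api.example.com") -> tuple:
    """Map the model's structured call onto an HTTP method and URL."""
    method = call.get("method", "GET").upper()
    query = urlencode(call.get("params", {}))
    url = f"{base_url}{call['endpoint']}"
    if query:
        url = f"{url}?{query}"
    return method, url

structured_call = {
    "endpoint": "/orders",
    "method": "GET",
    "params": {"country": "Germany", "date_range": "2025-04-28 to 2025-05-04"},
}
method, url = to_request(structured_call)
print(method, url)
```

Note that `urlencode` handles the spaces in the date range, which the raw concatenation in the diagram glosses over.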
Benefits of This Wrapper Approach
- Abstracts complexity: Users don’t need to know endpoints, parameters, or schema details.
- Multimodal I/O: Can return structured data, visualizations, or summaries depending on the request context.
- Rapid iteration: Business logic can evolve while maintaining a consistent user-facing interface.
- Domain adaptation: With fine-tuning or prompt engineering, the model can learn your domain-specific vocabulary or API
structure.
Example Use Cases
| Domain | Natural Language Query | Backend Function Triggered |
| --- | --- | --- |
| Logistics | "Track my last shipment to Canada" | GET /shipments?destination=Canada&status=latest |
| HR/Recruiting | "List applicants with Java and 5+ years experience" | POST /filter_candidates |
| Finance | "What was the revenue last quarter?" | GET /revenue?period=Q1-2025 |
| Healthcare | "Show me patients with elevated blood pressure" | POST /patients/filter |
Technical Implementation Hints
Use the Completions API to output structured JSON (via schema-based prompting or function-calling syntax).
Use a middleware layer that:
- Validates and sanitizes model outputs.
- Maps model outputs to concrete API calls.
- Feeds results back through the model for summary or user-facing presentation.
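A minimal sketch of the validation step, assuming an allow-list of known endpoints (the endpoint names come from the examples above; the rest is illustrative):

```python
# Only calls on this allow-list are ever forwarded to the backend.
ALLOWED = {
    "/orders": {"GET"},
    "/shipments": {"GET"},
    "/filter_candidates": {"POST"},
}

def validate_call(call: dict) -> dict:
    """Reject anything the model emits that we did not explicitly allow."""
    endpoint = call.get("endpoint")
    method = call.get("method", "GET").upper()
    if endpoint not in ALLOWED:
        raise ValueError(f"endpoint not allowed: {endpoint!r}")
    if method not in ALLOWED[endpoint]:
        raise ValueError(f"method {method} not allowed for {endpoint}")
    if not isinstance(call.get("params", {}), dict):
        raise ValueError("params must be an object")
    return call

safe = validate_call({"endpoint": "/orders", "method": "GET",
                      "params": {"country": "Germany"}})
print(safe["endpoint"])
```

Treating model output as untrusted input, exactly like user input, is the key design choice here: the model proposes, the middleware disposes.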
Bonus: Augmented Interaction
You can also support follow-up queries (like “What about France?”) by keeping conversational context and allowing the
model to generate delta queries or comparisons automatically.
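A follow-up like "What about France?" can be handled by merging the newly extracted entities into the previous structured call. The merge policy below is an assumption; a real system would typically let the model decide which fields carry over.

```python
def apply_delta(previous_call: dict, delta_params: dict) -> dict:
    """Build a follow-up call by overriding only the fields that changed."""
    merged = dict(previous_call)
    merged["params"] = {**previous_call.get("params", {}), **delta_params}
    return merged

last_call = {"endpoint": "/orders", "method": "GET",
             "params": {"country": "Germany",
                        "date_range": "2025-04-28 to 2025-05-04"}}
# "What about France?" -> only the country changes; the timeframe carries over.
follow_up = apply_delta(last_call, {"country": "France"})
print(follow_up["params"])
```

Keeping the previous call around as conversational state is what makes the delta query cheap: the model only needs to extract what changed.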