Tool-Augmented LLMs

What Tool-Augmented LLMs are in AI, and how to implement one in code

Introduction

Large Language Models (LLMs) are powerful at generating text, but they don’t always have access to real-time or specialized data. Tool-Augmented LLMs bridge this gap by letting the LLM call external tools, APIs, or databases to perform actions beyond its training knowledge.

Think of it as giving the LLM a Swiss Army knife — instead of only predicting text, it can call APIs, run calculations, fetch web data, or interact with databases.

Tool-Augmented LLMs = LLM + External Tools (APIs, databases, search engines, calculators, etc.)
The LLM decides when to call a tool, sends input to it, and then incorporates the tool’s result into its final response.
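In code, that loop looks roughly like this. The sketch below uses a toy "add two numbers" tool and rule-based routing (a regex) to decide when to call it; in production systems the LLM itself emits a structured tool call, but the three steps — decide, call, incorporate — are the same:

```javascript
// Minimal tool-augmentation loop: decide, call the tool, fold the result in.
const tools = {
  // A toy "add two numbers" tool standing in for any external API.
  add: (a, b) => a + b,
};

function answer(question) {
  // 1. Decide whether a tool is needed (rule-based here; an LLM can decide too).
  const match = question.match(/calculate (\d+) \+ (\d+)/i);
  if (match) {
    // 2. Call the tool with input extracted from the question.
    const result = tools.add(Number(match[1]), Number(match[2]));
    // 3. Incorporate the tool's result into the final response.
    return `The result is ${result}.`;
  }
  // No tool needed: in a real app, the question would go straight to the LLM.
  return "No tool needed; answer from the model directly.";
}

console.log(answer("calculate 2 + 3")); // → "The result is 5."
```

The weather agent we build below follows exactly this shape, with OpenWeatherMap as the tool.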

Why Tool-Augmented LLMs?

  • 🔎 Access real-time data (e.g., stock prices, weather)
  • 🧮 Do calculations (instead of approximating)
  • 🌍 Connect with external systems (databases, APIs, bots)
  • 🤖 Make LLMs more useful in practical applications

Example: Instead of asking an LLM, “What’s the weather in Delhi right now?” (which it can’t know), we let the LLM call a weather API and then generate a natural answer.

Scope

Tool-Augmented LLMs are widely used in:

  • Chatbots → fetch live stock prices, sports scores, etc.
  • AI Assistants → interact with calendars, emails, databases
  • Automation agents → order food, send reminders, or perform business tasks
  • Research tools → search the web for up-to-date knowledge

What We’ll Build

We’ll build a Weather Agent using Gemini 2.5 Pro that:

  1. Takes a city name as input
  2. Calls OpenWeather API (via tool)
  3. Returns a natural-language response

Example Project: Weather Assistant with Tool-Augmented LLM

We’ll create a Node.js app where the LLM uses a weather API tool to answer live weather questions.

Step 1: Project Setup

mkdir tool-augmented-llm

cd tool-augmented-llm

npm init -y

npm install dotenv axios @google/generative-ai

Create a .env file:

GEMINI_API_KEY=your_gemini_api_key_here
WEATHER_API_KEY=your_openweathermap_api_key_here
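dotenv loads these values into process.env at startup. A small guard like the one below (illustrative, not required) makes a missing key fail fast at launch instead of at the first API call:

```javascript
// Throw early if any required environment variable is missing.
function requireEnv(keys) {
  const missing = keys.filter((key) => !process.env[key]);
  if (missing.length > 0) {
    throw new Error(`Missing env vars: ${missing.join(", ")}`);
  }
}

// Call this after dotenv.config(), e.g.:
// requireEnv(["GEMINI_API_KEY", "WEATHER_API_KEY"]);
```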

Step 2: LLM with Tool Augmentation

Create tool_aug_llm.js:

import dotenv from "dotenv";
dotenv.config();

import { GoogleGenerativeAI } from "@google/generative-ai";
import axios from "axios";

const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

// Weather tool: fetch current conditions from the OpenWeatherMap API
async function getWeather(city) {
  try {
    const apiKey = process.env.WEATHER_API_KEY;
    const url = `https://api.openweathermap.org/data/2.5/weather?q=${encodeURIComponent(city)}&appid=${apiKey}&units=metric`;

    const response = await axios.get(url);
    return `The weather in ${city} is ${response.data.weather[0].description} with temperature ${response.data.main.temp}°C.`;
  } catch (error) {
    return `Failed to fetch weather for ${city}. Error: ${error.response?.status} - ${error.response?.data?.message}`;
  }
}

// Ask Gemini, augmenting with the weather tool when needed
async function askGemini(question) {
  console.log("question: ", question);
  const model = genAI.getGenerativeModel({ model: "gemini-2.5-pro" });

  // Simple rule-based tool call: if the question mentions weather,
  // extract the city after "in " (defaulting to Delhi) and call the tool.
  // The optional chaining guards against questions with no "in " clause.
  if (question.toLowerCase().includes("weather")) {
    const city = question.split("in ")[1]?.replace(/[?.!,]/g, "").trim() || "Delhi";
    console.log("city: ", city);
    const weatherInfo = await getWeather(city);
    console.log("weatherInfo: ", weatherInfo);

    return `According to live data: ${weatherInfo}`;
  }

  // Fallback: normal LLM response
  const result = await model.generateContent(question);
  return result.response.text();
}

// Run
(async () => {
  const answer = await askGemini("What is the weather in Delhi?");
  console.log(answer);
})();

Run Code

node tool_aug_llm.js

Output

question:  What is the weather in Delhi?
city:  Delhi
weatherInfo:  The weather in Delhi is light rain with temperature 31.54°C.
According to live data: The weather in Delhi is light rain with temperature 31.54°C.

What We Achieved

With tool augmentation, the LLM can call external APIs/tools (like a weather API) to fetch real-time data and then use that info to improve its response.

  1. User asks a question → e.g., "What's the weather in Delhi?"
  2. A simple city extractor pulls "Delhi" from the question.
  3. The weather API (OpenWeatherMap) is called to fetch real-time temperature and conditions.
  4. The tool's result is wrapped in a short lead-in ("According to live data: ...") and returned as the final answer.
  5. Questions that don't mention weather fall through to Gemini 2.5 Pro, which answers them directly.
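In this example the tool result is returned directly; a fuller implementation would hand it back to Gemini so the model phrases the final answer itself. A sketch of that hand-off (the `buildAugmentedPrompt` helper and its wording are illustrative, not part of the Gemini SDK):

```javascript
// Build an augmented prompt that bundles the tool result with the question,
// so the model can phrase the final answer using the live data.
function buildAugmentedPrompt(question, toolResult) {
  return [
    "Answer the user's question using the live data below.",
    `Question: ${question}`,
    `Live data: ${toolResult}`,
  ].join("\n");
}

const prompt = buildAugmentedPrompt(
  "What is the weather in Delhi?",
  "The weather in Delhi is light rain with temperature 31.54°C."
);
console.log(prompt);
// This prompt would then replace `question` in the Gemini call, e.g.:
// const result = await model.generateContent(prompt);
```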

Full code is available on GitHub: https://github.com/110059/tool-augmented-llm
