date: Dec 12, 2024
type: Page
status: Published
summary: Running an AI agent with external tools and resources
tags: Tutorial, Teaching, ComfyUI, UCL
category: Knowledge

1. Available external tools and resources in LLM Party

ComfyUI LLM Party provides various toolkits that can be easily integrated with an LLM.
  • Networking: allows the LLM to connect to the internet to search for real-time information.
notion image
  • Knowledge base: allows the LLM to load specific local files or online content.
notion image
  • Utility: allows the LLM to get specific real-time information, such as weather, time, location, etc.
notion image
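To illustrate the kind of request a utility tool makes under the hood, here is a hedged sketch of a current-weather lookup against the free Open-Meteo forecast endpoint. This is only an illustration: the actual LLM Party weather node may use a different provider and different parameters.

```python
# Sketch of a weather "utility" call. Open-Meteo is a free forecast API;
# the endpoint and parameters here are illustrative, not LLM Party's own.

def build_weather_request(latitude: float, longitude: float) -> tuple[str, dict]:
    """Build the URL and query parameters for a current-weather lookup."""
    url = "https://api.open-meteo.com/v1/forecast"
    params = {
        "latitude": latitude,
        "longitude": longitude,
        "current_weather": "true",  # request only current conditions
    }
    return url, params

# Example: London
url, params = build_weather_request(51.5074, -0.1278)
# import requests
# weather = requests.get(url, params=params).json()  # uncomment to call
```

Building the request separately from sending it makes the parameters easy to inspect before spending a network call.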
 

2. Start your AI agent with extensible tools in ComfyUI

You can download the workflow below as an example.
notion image
notion image
notion image
notion image
 
This workflow integrates some common tools (online search, loading local documents, getting time and weather, etc.).
However, many tools require you to set up an API key (searching, weather, etc.) or download external models (for embedding long local files or online content).
Let's start by asking the AI to search online from within ComfyUI.
 

2.1 Get your Google Search API key ready

To use Google search, you need to acquire your own google_api_key and google_CSE_ID. The Google Search API gives you 100 free requests per day.
 
notion image

Get your Google API key

You can create your google_api_key here.
notion image
Click Get a key.
notion image
Create a new project if you do not have one yet, then click next.
notion image
Then click SHOW KEY.
notion image
Then copy and save the key.
notion image
Paste it into ComfyUI.

Get your google_CSE_ID

Now you can get your google_CSE_ID here.
notion image
Add a project if you do not have one yet.
notion image
Choose “Search the entire web” and click Create.
notion image
You are almost done.
notion image
Now you get your search engine ID.
notion image
Paste it into ComfyUI, and you are ready.
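If you want to verify the two credentials outside ComfyUI, you can query Google's Custom Search JSON API directly. The sketch below only builds the request; the placeholder credentials are stand-ins for your own, and this is not how LLM Party calls the API internally, just a sanity check.

```python
# Hedged sketch: checking a google_api_key / google_CSE_ID pair against
# the Custom Search JSON API. Replace the placeholder strings with your
# own credentials before uncommenting the network call.

def build_search_request(api_key: str, cse_id: str, query: str) -> tuple[str, dict]:
    """Build the URL and query parameters for one Custom Search request."""
    url = "https://www.googleapis.com/customsearch/v1"
    params = {
        "key": api_key,   # your google_api_key
        "cx": cse_id,     # your google_CSE_ID (search engine ID)
        "q": query,       # the search query
        "num": 5,         # number of results (max 10 per request)
    }
    return url, params

url, params = build_search_request("YOUR_GOOGLE_API_KEY", "YOUR_CSE_ID",
                                   "mycelium bio-intelligence")
# import requests
# results = requests.get(url, params=params).json()
# for item in results.get("items", []):
#     print(item["title"], item["link"])
```

Remember that each call counts against your 100 free daily requests.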

2.2 Asking the AI to search online

notion image
 
notion image
Here we take an example: using the bio-intelligence of mycelium to design future London. Here are several key points:
  • Make sure the city you set in the time and weather nodes is the same as the one in your questions.
  • Tell the AI clearly which tool you want it to use when answering your question.
notion image
Then we can see that the LLM answered our question after searching online for more reliable information.
 

3. Advanced skills

3.1 Skipping some nodes in ComfyUI

When you have a lot of nodes in ComfyUI, you will sometimes want to skip some of them for testing purposes. There are two ways to do that.
 
For a single node:
notion image
notion image
Right-click a single node to bypass it.
 
For multiple nodes:
notion image
notion image
notion image
You need to add a group, then drag the corner of the group to include all the nodes you want to skip running.
notion image
notion image
Then you can skip the group by right-clicking it and selecting “set group to never.”
 

3.2 Continuous chatting

notion image
If you enable is_memory, the LLM will continue the conversation with the context of previous messages, which helps you dive into a deeper discussion.
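Conceptually, is_memory just means the message history is kept and resent with every new prompt. The sketch below shows that idea with a hypothetical `call_llm` stand-in; the real node's internals will differ.

```python
# Minimal sketch of what is_memory does conceptually: retain the message
# history and include it in every call. `call_llm` is a hypothetical
# placeholder for the actual LLM API call.

def call_llm(messages: list) -> str:
    # Placeholder: a real implementation would POST `messages` to an LLM API.
    return f"(reply given {len(messages)} messages of context)"

class ChatSession:
    def __init__(self, is_memory: bool):
        self.is_memory = is_memory
        self.history = []

    def ask(self, user_text: str) -> str:
        if not self.is_memory:
            self.history = []  # memory off: start fresh every turn
        self.history.append({"role": "user", "content": user_text})
        reply = call_llm(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

chat = ChatSession(is_memory=True)
chat.ask("Design a future London with mycelium.")
chat.ask("Expand on the transport network.")  # second turn sees the first
```

With memory enabled the history keeps growing, which is also why very long chats eventually hit the model's context limit.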
 

3.3 Self-evolving AI agent architecture

You can set up two AIs acting in different roles; they can keep self-improving by talking with each other. Try the workflow below.
notion image
Here is the interface for this self-evolving AI agent architecture.
notion image
There are a few options you need to set before starting a new dialog:
  • Set is_reload in Starting Dialog to true.
  • Set is_memory in both API LLM general link nodes to disable, to clear the dialog history from the last topic.
notion image
After you run the first round of the conversation, reverse the previous settings to allow the dialog to continue with the previous context.
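The loop driving this setup can be sketched as follows: each agent's reply becomes the other agent's next input. Here `agent_reply` is a hypothetical stand-in for an LLM call with a role-specific system prompt, and the role names are illustrative.

```python
# Hedged sketch of the two-agent self-improving loop. `agent_reply` is a
# placeholder for calling an LLM with `role` as its system prompt.

def agent_reply(role: str, incoming: str) -> str:
    # Placeholder: a real implementation would send `incoming` to an LLM
    # configured with a system prompt for `role`.
    return f"[{role}] responding to: {incoming[:40]}"

def run_dialog(topic: str, rounds: int) -> list:
    """Alternate between two roles, feeding each reply to the other agent."""
    transcript = []
    message = topic
    for _ in range(rounds):
        message = agent_reply("critic", message)    # agent A reviews
        transcript.append(message)
        message = agent_reply("designer", message)  # agent B refines
        transcript.append(message)
    return transcript

log = run_dialog("Design future London with mycelium", rounds=2)
```

Each round, the designer's output is critiqued and the critique is fed back, which is what lets the pair iterate without human input.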
 

4. Further thinking

4.1 Connecting text reasoning with AI generation ability

💡
Could you connect this dialog with the previous agent's image generation ability?

4.2 Feeding long content into an LLM

For short text, feed content directly into the LLM as a user prompt.
For long content, embed it first to enable efficient LLM processing. Configure your local embedding model path and vector database base path.
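The first step of that embedding pipeline is splitting the long document into overlapping chunks small enough for the embedding model. The sketch below shows only that chunking step; the chunk size and overlap are illustrative values, not LLM Party defaults.

```python
# Hedged sketch of the chunking step that precedes embedding: split a
# long text into overlapping pieces. Sizes are illustrative, not the
# defaults used by any particular embedding node.

def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list:
    """Split `text` into chunks of `chunk_size` characters, each chunk
    sharing `overlap` characters with the previous one."""
    chunks = []
    step = chunk_size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + chunk_size])
        if start + chunk_size >= len(text):
            break  # the final chunk already reaches the end of the text
    return chunks

chunks = chunk_text("word " * 1000)  # a 5000-character example document
# Each chunk would then be embedded and stored in the vector database.
```

The overlap keeps sentences that straddle a chunk boundary retrievable from at least one chunk.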
It is a bit too complicated to include in this lecture. If you are interested, please search for it online.
 