Description
After many attempts, I either get errors or see no optimization when trying to optimize an agent workflow built with LangGraph:
- I can only optimize the main function that generates the graph (e.g. `generate_report` in the example below); when I build a longer graph, it rarely gets optimized.
- I cannot train/bundle a graph node's function directly (`ValueError: no signature found for builtin ...`). The workaround I found is to move that function's code into a separate function, mark it trainable, and call it from the node (e.g. `plan_node_train` and `plan_node` in the example). This no longer raises an error, but the function does not seem to be optimized. A minimal sketch of the failing case follows this list.
- Only the main function that generates the graph seems to be optimizable; no node, and no function called from a node, is ever optimized.
- Optimizing a trace node value does not seem to work either (see the isolated sketch after the full script).
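
For reference, here is a minimal sketch of the direct-bundle case that fails for me (it assumes the same imports as the full script below; the exact point where the error surfaces may depend on the LangGraph version):

```python
from langgraph.graph import StateGraph, START, END
from opto.trace import bundle

@bundle(trainable=True)
def plan_node(state: dict):
    """Creates an initial plan."""
    state['plan'] = "Initial plan: Execute Task A, Task B, and Task C."
    return state

graph_builder = StateGraph(dict)
# Registering the bundled callable directly as a node is what triggers
# "ValueError: no signature found for builtin ..." for me.
graph_builder.add_node("plan", plan_node)
graph_builder.add_edge(START, "plan")
graph_builder.add_edge("plan", END)
graph = graph_builder.compile()
graph.invoke({})
```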
You can test the full script directly:
```python
from langgraph.graph import StateGraph, START, END
from opto.trace import node, bundle
from opto.optimizers import OptoPrime

state_plan = node("Initial plan: Execute Task A, Task B, and Task C.",
                  trainable=True,
                  description="This represents the current plan of the agent.")

@bundle(trainable=True)
def plan_node_train(state: dict):
    """Creates an initial plan."""
    global state_plan
    state['plan'] = state_plan
    return state

# @bundle(trainable=True)  # decorating the node directly raises "ValueError: no signature found for builtin ..."
def plan_node(state: dict):
    return plan_node_train(state)
    # """Creates an initial plan."""
    # state['plan'] = "Initial plan: Execute Task A, Task B, and Task C."
    # return state

@bundle(trainable=True)
def self_critique_node_train(state: dict):
    """Improves the plan based on self-critique."""
    # For illustration, we simply append an improvement comment.
    state['plan'] += " -- Improved after self critique."
    return state

def self_critique_node(state: dict):
    return self_critique_node_train(state)

@bundle(trainable=True)
def finalize_node_train(state: dict):
    """Finalizes the report using the current plan."""
    state['final_report'] = state['plan'] + " -- Final Report."
    return state['final_report']

def finalize_node(state: dict):
    return finalize_node_train(state)

@bundle(trainable=True)
def generate_report():
    # Initialize an empty state
    initial_state = {}
    # Create a simple LangGraph with 3 nodes:
    # START -> plan_node -> self_critique_node -> finalize_node -> END
    graph_builder = StateGraph(dict)
    graph_builder.add_node("plan", plan_node)
    graph_builder.add_node("self_critique", self_critique_node)
    graph_builder.add_node("finalize", finalize_node)
    graph_builder.add_edge(START, "plan")
    graph_builder.add_edge("plan", "self_critique")
    graph_builder.add_edge("self_critique", "finalize")
    graph_builder.add_edge("finalize", END)
    # Compile and invoke the graph
    graph = graph_builder.compile()
    final_state = graph.invoke(initial_state)
    final_report_str = f"Final Report: {final_state}"
    print(final_report_str)
    return final_report_str

# Also tried:
# parameters = [state_plan] + generate_report.parameters() + plan_node_train.parameters() + self_critique_node_train.parameters() + finalize_node_train.parameters()
parameters = generate_report.parameters()
# parameters = [state_plan]                  # NO OPTIMIZATION
# parameters = plan_node_train.parameters()  # NO OPTIMIZATION

optimizer = OptoPrime(parameters)  # , memory_size=2)
optimizer.zero_feedback()
report = generate_report()

# Run a dummy backward pass and step to update parameters.
optimizer.backward(report, "The report quality is low; the plan and content are far too short and do not align with research practice.")
optimizer.step()

# Print out optimized parameters (for illustration)
for param in optimizer.parameters:
    print("Optimized parameter:", param.name, param.data)
```