Condense Question Chat Engine
- class llama_index.chat_engine.condense_question.CondenseQuestionChatEngine(query_engine: BaseQueryEngine, condense_question_prompt: Prompt, chat_history: List[Tuple[str, str]], service_context: ServiceContext, verbose: bool = False)
Condense Question Chat Engine.
First generates a standalone question from the conversation context and the last message, then queries the query engine for a response.
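The two-step flow described above (condense, then query) can be sketched in plain Python without the library. `CONDENSE_PROMPT`, `condense_question`, and `EchoQueryEngine` below are illustrative stand-ins, not LlamaIndex APIs; in the real engine, step 1 is performed by an LLM call and step 2 by a `BaseQueryEngine`:

```python
from typing import List, Tuple

# Illustrative stand-in for the condense-question prompt template.
CONDENSE_PROMPT = (
    "Given the following conversation and a follow-up message, "
    "rephrase the follow-up as a standalone question.\n"
    "Chat history:\n{history}\n"
    "Follow-up: {message}\n"
    "Standalone question:"
)

def condense_question(chat_history: List[Tuple[str, str]], message: str) -> str:
    """Step 1: fold the chat history and last message into one prompt.

    A real engine would send this prompt to an LLM to obtain the
    standalone question; here we just return the formatted prompt
    to show the data flow.
    """
    history = "\n".join(f"Human: {q}\nAssistant: {a}" for q, a in chat_history)
    return CONDENSE_PROMPT.format(history=history, message=message)

class EchoQueryEngine:
    """Step 2 stand-in: a query engine that echoes its input."""

    def query(self, question: str) -> str:
        return f"[answer to: {question!r}]"

history = [("What is LlamaIndex?", "A data framework for LLM apps.")]
standalone = condense_question(history, "How do I install it?")
response = EchoQueryEngine().query(standalone)
```

The key point is that the query engine only ever sees a single self-contained question, so it needs no knowledge of the conversation state.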
- async achat(message: str) -> Union[Response, StreamingResponse]
Async version of main chat interface.
- chat(message: str) -> Union[Response, StreamingResponse]
Main chat interface.
- chat_repl() -> None
Enter interactive chat REPL.
- classmethod from_defaults(query_engine: BaseQueryEngine, condense_question_prompt: Optional[Prompt] = None, chat_history: Optional[List[Tuple[str, str]]] = None, service_context: Optional[ServiceContext] = None, verbose: bool = False, **kwargs: Any) -> CondenseQuestionChatEngine
Initialize a CondenseQuestionChatEngine from default parameters.
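The signature above suggests that `from_defaults` fills each omitted optional argument (prompt, chat history, service context) with a library default. That pattern can be illustrated with a toy engine; the class and `DEFAULT_PROMPT` below are a sketch of the idiom, not the library implementation:

```python
from typing import Any, List, Optional, Tuple

# Assumed placeholder; the real default prompt lives inside LlamaIndex.
DEFAULT_PROMPT = "Rephrase the follow-up as a standalone question."

class ToyCondenseEngine:
    def __init__(
        self,
        query_engine: Any,
        condense_question_prompt: str,
        chat_history: List[Tuple[str, str]],
        verbose: bool = False,
    ) -> None:
        self.query_engine = query_engine
        self.condense_question_prompt = condense_question_prompt
        self.chat_history = chat_history
        self.verbose = verbose

    @classmethod
    def from_defaults(
        cls,
        query_engine: Any,
        condense_question_prompt: Optional[str] = None,
        chat_history: Optional[List[Tuple[str, str]]] = None,
        verbose: bool = False,
    ) -> "ToyCondenseEngine":
        # Each optional argument falls back to a sensible default,
        # so callers only need to supply the query engine.
        return cls(
            query_engine,
            condense_question_prompt or DEFAULT_PROMPT,
            chat_history if chat_history is not None else [],
            verbose,
        )

engine = ToyCondenseEngine.from_defaults(query_engine=None)
```

The advantage of a `from_defaults` classmethod over default values in `__init__` is that the defaults can involve objects (prompts, service contexts) that are expensive or awkward to construct at class-definition time.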
- reset() -> None
Reset conversation state.