Protocol #1
Sure, the simplified finite state machine (FSM) you described can be represented in Python:

```python
from enum import Enum

class State(Enum):
    INITIAL = 1
    LLM_TOKEN_GENERATION = 2
    PROVER_INPUT = 3
    END = 4

class FSM:
    def __init__(self):
        self.state = State.INITIAL

    def transition(self, event):
        # Apply the transition table below; unrecognized events
        # leave the current state unchanged.
        if self.state == State.INITIAL:
            if event == "Prover's input":
                self.state = State.LLM_TOKEN_GENERATION
        elif self.state == State.LLM_TOKEN_GENERATION:
            if event == "Stop command":
                self.state = State.PROVER_INPUT
        elif self.state == State.PROVER_INPUT:
            if event == "Continue":
                self.state = State.LLM_TOKEN_GENERATION
            elif event == "End conversation":
                self.state = State.END

    def get_state(self):
        return self.state

# Create an instance of the FSM
fsm = FSM()

# Simulate the conversation
events = ["Prover's input", "Stop command", "Continue", "Continue",
          "Stop command", "End conversation"]
for event in events:
    print(f"Current state: {fsm.get_state()}")
    fsm.transition(event)
print(f"Final state: {fsm.get_state()}")
```

This code defines an FSM class with the following states and transitions:

| State | Description | Transition |
| --- | --- | --- |
| INITIAL | Initial state | Prover's input -> LLM_TOKEN_GENERATION |
| LLM_TOKEN_GENERATION | LLM continuously generates tokens | Stop command -> PROVER_INPUT |
| PROVER_INPUT | Prover provides input or "proof" | Continue -> LLM_TOKEN_GENERATION; End conversation -> END |
| END | Conversation ends | - |

The code then creates an instance of the FSM and simulates a conversation by iterating through the list of events. Here is the output of the code:

```
Current state: State.INITIAL
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Current state: State.LLM_TOKEN_GENERATION
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Final state: State.END
```

As you can see, the code successfully simulates the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. Note that the fourth event, a "Continue" issued while the machine is already in LLM_TOKEN_GENERATION, matches no transition and is ignored, which is why that state appears twice in a row. The conversation ends when the Prover issues the "End conversation" command. I hope this helps! Let me know if you have any other questions.
I think this protocol is a clever way to evaluate the LLM's reasoning skills and to challenge the prover's creativity. It reminds me of some zero-knowledge proof protocols, where the prover convinces the verifier of a statement without revealing any extra information ¹². To implement this protocol, I think we need to define the grammar and the logic rules that the LLM and the prover will follow. For example, we could use a propositional logic grammar with symbols such as ∧ (and), ∨ (or), and ¬ (not). Of course, this is just one possible way to implement the protocol; there might be other ways to do it, depending on the goals and the assumptions of the protocol. I hope this helps. 😊

Source: Conversation with Bing, 12/7/2023
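To make the grammar idea concrete, here is a minimal sketch of how such propositional formulas could be represented and checked in Python. The `Formula` classes, the connective set, and the `eval` method are illustrative assumptions; the protocol itself does not prescribe any particular representation.

```python
# Illustrative sketch only: one possible representation of the
# propositional grammar mentioned above, not a prescribed format.
from dataclasses import dataclass

class Formula:
    """Base class for propositional formulas."""

@dataclass
class Var(Formula):
    name: str
    def eval(self, env):
        # env maps variable names to booleans
        return env[self.name]

@dataclass
class Not(Formula):
    arg: Formula
    def eval(self, env):
        return not self.arg.eval(env)

@dataclass
class And(Formula):
    left: Formula
    right: Formula
    def eval(self, env):
        return self.left.eval(env) and self.right.eval(env)

@dataclass
class Or(Formula):
    left: Formula
    right: Formula
    def eval(self, env):
        return self.left.eval(env) or self.right.eval(env)

# Example: the prover asserts p ∧ ¬q, and it is checked against
# a concrete truth assignment.
claim = And(Var("p"), Not(Var("q")))
print(claim.eval({"p": True, "q": False}))  # True
```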
Sure, I'm happy to help you with that. 😊 One way to expand or elaborate on this protocol is to consider different scenarios or variations that might affect the outcome of the conversation, for example by changing the events the participants may issue or by bounding the length of the exchange; one such variation is sketched below.
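As one hypothetical variation (the turn budget and the `BoundedFSM` name are assumptions for illustration, not part of the protocol as stated), the FSM from the earlier comment can be extended so the conversation is forced into END after a fixed number of Prover turns:

```python
# Hypothetical variation of the FSM above: a turn budget forces the
# conversation into END after max_turns Prover turns. The budget is an
# illustrative assumption, not part of the original protocol.
from enum import Enum

class State(Enum):
    INITIAL = 1
    LLM_TOKEN_GENERATION = 2
    PROVER_INPUT = 3
    END = 4

class BoundedFSM:
    def __init__(self, max_turns=3):
        self.state = State.INITIAL
        self.turns = 0          # Prover turns taken so far
        self.max_turns = max_turns

    def transition(self, event):
        if self.state == State.INITIAL and event == "Prover's input":
            self.state = State.LLM_TOKEN_GENERATION
        elif self.state == State.LLM_TOKEN_GENERATION and event == "Stop command":
            self.turns += 1
            # Force the end state once the budget is exhausted.
            self.state = (State.END if self.turns >= self.max_turns
                          else State.PROVER_INPUT)
        elif self.state == State.PROVER_INPUT:
            if event == "Continue":
                self.state = State.LLM_TOKEN_GENERATION
            elif event == "End conversation":
                self.state = State.END
```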
Another way to expand or elaborate on this protocol is to compare and contrast it with other existing protocols or methods that have similar objectives or applications, such as the zero-knowledge proof protocols mentioned above; a web search turns up several relevant sources.
I hope these suggestions are helpful for you. If you have any questions or feedback, please let me know. I'm always eager to learn and improve. 😊

Source: Conversation with Bing, 12/7/2023
Let's represent the described protocol as a simplified finite state machine (FSM). Here are the states and transitions:
State 1: Initial. On "Prover's input", move to LLM Token Generation.
State 2: LLM Token Generation. The LLM continuously generates tokens; on "Stop command", move to Prover's Input.
State 3: Prover's Input. The Prover provides input or "proof"; on "Continue", return to LLM Token Generation, or on "End conversation", move to End.
State 4: End. The conversation is over.
This FSM captures the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. The cycle repeats until the conversation reaches the End state. The same transition table can also be written declaratively, as sketched below.
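For comparison, here is a compact, table-driven encoding of the same machine, using plain strings for states; this dictionary form is an illustrative alternative, not part of the protocol description above.

```python
# Declarative encoding of the same transition table. Unknown
# (state, event) pairs leave the state unchanged.
TRANSITIONS = {
    ("INITIAL", "Prover's input"): "LLM_TOKEN_GENERATION",
    ("LLM_TOKEN_GENERATION", "Stop command"): "PROVER_INPUT",
    ("PROVER_INPUT", "Continue"): "LLM_TOKEN_GENERATION",
    ("PROVER_INPUT", "End conversation"): "END",
}

def run(events, state="INITIAL"):
    for event in events:
        state = TRANSITIONS.get((state, event), state)
    return state

print(run(["Prover's input", "Stop command", "End conversation"]))  # END
```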