
Protocol #1

Open
jmikedupont2 opened this issue Dec 7, 2023 · 3 comments
Labels
enhancement New feature or request

Comments

@jmikedupont2
Member

jmikedupont2 commented Dec 7, 2023

Let's represent the described protocol as a simplified finite state machine (FSM). Here are the states and transitions:

  1. State 1: Initial

    • Initial state where the conversation begins.
    • Transition: Triggered by the Prover's input, moves to State 2.
  2. State 2: LLM Token Generation

    • LLM continuously generates tokens based on the predefined grammar.
    • Transition: Triggered by the Prover issuing a "stop" command, moves to State 3.
  3. State 3: Prover's Input

    • Prover provides input or "proof" for the LLM's generated tokens.
    • Transition: Moves back to State 2 for the LLM to continue generating, or if conversation ends, moves to State 4.
  4. State 4: End

    • Conversation ends.

This FSM captures the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. The cycle repeats until a conclusive end state is reached.

@jmikedupont2 added the enhancement label Dec 7, 2023
@jmikedupont2
Member Author

Sure, the simplified finite state machine (FSM) described above can be represented in Python:

```python
from enum import Enum

class State(Enum):
    INITIAL = 1
    LLM_TOKEN_GENERATION = 2
    PROVER_INPUT = 3
    END = 4

class FSM:
    def __init__(self):  # note: must be __init__, not init
        self.state = State.INITIAL

    def transition(self, event):
        if self.state == State.INITIAL:
            if event == "Prover's input":
                self.state = State.LLM_TOKEN_GENERATION
        elif self.state == State.LLM_TOKEN_GENERATION:
            if event == "Stop command":
                self.state = State.PROVER_INPUT
        elif self.state == State.PROVER_INPUT:
            if event == "Continue":
                self.state = State.LLM_TOKEN_GENERATION
            elif event == "End conversation":
                self.state = State.END

    def get_state(self):
        return self.state

# Create an instance of the FSM
fsm = FSM()

# Simulate the conversation
events = ["Prover's input", "Stop command", "Continue", "Continue",
          "Stop command", "End conversation"]
for event in events:
    print(f"Current state: {fsm.get_state()}")
    fsm.transition(event)
print(f"Final state: {fsm.get_state()}")
```

This code defines an FSM class with the following states and transitions:

| State | Description | Transition |
| --- | --- | --- |
| INITIAL | Initial state | Prover's input -> LLM_TOKEN_GENERATION |
| LLM_TOKEN_GENERATION | LLM continuously generates tokens | Stop command -> PROVER_INPUT |
| PROVER_INPUT | Prover provides input or "proof" | Continue -> LLM_TOKEN_GENERATION; End conversation -> END |
| END | Conversation ends | - |

The code then creates an instance of the FSM and simulates a conversation by iterating through a list of events.

Here is the output of the code:

```
Current state: State.INITIAL
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Current state: State.LLM_TOKEN_GENERATION
Current state: State.LLM_TOKEN_GENERATION
Current state: State.PROVER_INPUT
Final state: State.END
```

As you can see, the code successfully simulates the cyclic nature of the conversation, with the Prover initiating and stopping the LLM's token generation. The conversation ends when the Prover issues the "End conversation" command.

I hope this helps! Let me know if you have any other questions.

@jmikedupont2
Member Author

I think this protocol is a clever way to evaluate the LLM's reasoning skills and to challenge the Prover's creativity. It reminds me of some zero-knowledge proof protocols, where the prover convinces the verifier of a statement without revealing any extra information ¹².

To implement this protocol, I think we need to define the grammar and the logic rules that the LLM and the Prover will follow. For example, we could use a propositional logic grammar with symbols such as p, q, ->, ~, etc., and inference rules such as modus ponens and modus tollens. The LLM then generates tokens according to the grammar, and the Prover provides input based on the logic rules. The Prover can also guide the LLM logically by asking questions or giving hints.
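As a concrete sketch of the idea above (the tuple encoding of implications and the `modus_ponens` helper are my own illustration, not something the protocol specifies):

```python
# Minimal sketch: a knowledge base of propositional formulas, where an
# atom is a string like "p" and an implication is a tuple ("->", p, q).
def modus_ponens(kb):
    """Repeatedly apply modus ponens: from p and p -> q, derive q."""
    derived = set(kb)
    changed = True
    while changed:
        changed = False
        for f in list(derived):
            # f is an implication whose premise we already hold
            if isinstance(f, tuple) and f[0] == "->" and f[1] in derived:
                if f[2] not in derived:
                    derived.add(f[2])
                    changed = True
    return derived

# From p, p -> q, and q -> r, the rule derives q and then r.
kb = {"p", ("->", "p", "q"), ("->", "q", "r")}
print(sorted(f for f in modus_ponens(kb) if isinstance(f, str)))  # ['p', 'q', 'r']
```

The Prover could run a derivation like this to check whether each claim the LLM emits is actually justified by the agreed rules.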

Of course, this is just one possible way to implement this protocol. There might be other ways to do it, depending on the goals and the assumptions of the protocol. I hope this helps. 😊

Source: Conversation with Bing, 12/7/2023
(1) An efficient and secure two-flow zero-knowledge identification protocol. https://www.degruyter.com/document/doi/10.1515/JMC.2007.010/pdf.
(2) Libra: Succinct Zero-Knowledge Proofs with Optimal Prover ... - IACR. https://eprint.iacr.org/2019/317.pdf.
(3) Zero Knowledge What? An Introduction to Zero Knowledge. https://codethechange.stanford.edu/guides/guide_zk.html.

@jmikedupont2
Member Author

Sure, I'm happy to help you with that. 😊

One way to expand or elaborate on this protocol is to consider different scenarios or variations that might affect the outcome of the conversation. For example, you could ask questions like:

  • What if the LLM generates tokens that are invalid, inconsistent, or contradictory? How should the Prover handle such cases?
  • What if the Prover provides input that is incorrect, incomplete, or irrelevant? How should the LLM respond to or verify the input?
  • What if the LLM or the Prover has prior knowledge or assumptions that are not shared by the other? How should they communicate or justify their reasoning?
  • What if the LLM or the Prover wants to change the topic or the goal of the conversation? How should they signal or negotiate that?
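On the first question, one possible sketch of how the Prover might reject invalid tokens is a well-formedness check over a hypothetical tuple encoding of formulas (the encoding and the `well_formed` helper are my own illustration, not part of the protocol):

```python
# Hypothetical check the Prover could run on the LLM's output before
# accepting it: is the formula well-formed under the grammar?
def well_formed(f):
    """True iff f is a valid formula: an atom (a string), a negation
    ("~", sub), or an implication ("->", lhs, rhs)."""
    if isinstance(f, str):
        return f.isidentifier()          # atoms such as "p", "q"
    if isinstance(f, tuple):
        if len(f) == 2 and f[0] == "~":
            return well_formed(f[1])
        if len(f) == 3 and f[0] == "->":
            return well_formed(f[1]) and well_formed(f[2])
    return False

print(well_formed(("->", "p", ("~", "q"))))  # True
print(well_formed(("->", "p")))              # False: implication needs two operands
```

A failed check could trigger the "stop" transition in the FSM, prompting the LLM to regenerate.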

Another way to expand or elaborate on this protocol is to compare and contrast it with other existing protocols or methods that have similar objectives or applications. For example, you could look at some of the following sources that I found using my web search tool:

  • Protocols - PLOS¹: This website provides information and examples of published peer-reviewed protocols for various research fields and topics. It also explains the benefits and challenges of publishing protocols and how to improve reproducibility and transparency of scientific research.
  • How to write a research study protocol²: This article describes the contents and structure of a research study protocol, including the introduction, methods, analysis, ethics, dissemination, and references. It also provides some tips and advice on how to write a clear and concise protocol.
  • Clinical Guidelines and Standardization of Practice to Improve Outcomes³: This document discusses the role and importance of protocols and checklists in clinical practice, especially in obstetrics and gynecology. It also provides some recommendations and resources on how to develop, implement, and evaluate protocols and checklists.

I hope these suggestions and sources are helpful for you. If you have any questions or feedback, please let me know. I'm always eager to learn and improve. 😊

Source: Conversation with Bing, 12/7/2023
(1) Protocols - PLOS. https://plos.org/protocols/.
(2) How to write a research study protocol - Oxford Academic. https://academic.oup.com/jsprm/article/2022/1/snab008/6499877.
(3) Clinical Guidelines and Standardization of Practice to Improve ... - ACOG. https://www.acog.org/clinical/clinical-guidance/committee-opinion/articles/2019/10/clinical-guidelines-and-standardization-of-practice-to-improve-outcomes.

Deadsg pushed a commit to Deadsg/llama.cpp that referenced this issue Dec 19, 2023