
tsinghua-fib-lab/LLM_for_Polarization


LLM-Polarization

Official implementation of the paper "Emergence of human-like polarization among large language model agents".

System requirements

python == 3.8.11
numpy == 1.23.5
scipy == 1.10.1
scikit-learn == 1.2.2
matplotlib == 3.7.1
seaborn == 0.12.2
jupyter notebook == 6.4.8
openai == 0.28.0

Installation Guide

Assigning LLM

Fill in the following in the file utils_repub_pol_5_gpt_affect_dummy_simulate_sorted_fix.py:

  1. OpenAI API key
  2. LLM model for the simulation
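With the `openai==0.28.0` client pinned above, the two values might be wired up as in this minimal sketch. The variable names here are hypothetical; the actual names inside utils_repub_pol_5_gpt_affect_dummy_simulate_sorted_fix.py may differ.

```python
# Hypothetical constants; replace the placeholder values with your own.
OPENAI_API_KEY = "sk-..."      # placeholder OpenAI API key
LLM_MODEL = "gpt-3.5-turbo"    # model used to drive the agents

# With the pinned openai==0.28.0 client, these would be consumed as:
#   import openai
#   openai.api_key = OPENAI_API_KEY
#   openai.ChatCompletion.create(model=LLM_MODEL, messages=[...])
```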

Demo

Define keywords for experiment prompts:

  1. environment: Field of conversation
  2. topic: Topic to discuss
  3. S_m2: Extreme negative standpoint
  4. S_m1: Moderate negative standpoint
  5. S_0: Neutral standpoint
  6. S_p2: Extreme positive standpoint
  7. S_p1: Moderate positive standpoint
  8. S_m2_e: Explanation of S_m2
  9. S_m1_e: Explanation of S_m1
  10. S_0_e: Explanation of S_0
  11. S_p1_e: Explanation of S_p1
  12. S_p2_e: Explanation of S_p2
  13. side_s_0: Negative agent description
  14. side_e_0: Neutral agent description
  15. side_b_0: Positive agent description
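Taken together, the keywords above could be gathered into a single dict. The keys follow the list; the topic and all values below are illustrative placeholders, not the prompts used in the paper.

```python
# Illustrative keyword set for a hypothetical topic ("gun control");
# every value is an example only.
keywords = {
    "environment": "an online political forum",   # field of conversation
    "topic": "gun control",                       # topic to discuss
    "S_m2": "strongly oppose",                    # extreme negative standpoint
    "S_m1": "somewhat oppose",                    # moderate negative standpoint
    "S_0": "neutral",                             # neutral standpoint
    "S_p1": "somewhat support",                   # moderate positive standpoint
    "S_p2": "strongly support",                   # extreme positive standpoint
    "S_m2_e": "Gun ownership must not be restricted in any way.",
    "S_m1_e": "Most proposed restrictions on guns go too far.",
    "S_0_e": "There are reasonable arguments on both sides.",
    "S_p1_e": "Some additional restrictions on guns are sensible.",
    "S_p2_e": "Guns should be strictly regulated.",
    "side_s_0": "an agent who initially opposes gun control",   # negative agent
    "side_e_0": "an agent who is initially undecided",          # neutral agent
    "side_b_0": "an agent who initially supports gun control",  # positive agent
}
```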

Define experimental settings:

  1. datasource: Path to the network used for initialization
  2. num_epoch: Number of epochs to simulate
  3. starting_epoch: Epoch from which to resume a simulation (0 for a new simulation)
  4. side_init: Initial standpoint distribution of the agents
  5. abb: Output path for experimental results
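The settings above could be collected as in this sketch; the paths, epoch count, and distribution are placeholder values, not the configuration used in the paper.

```python
# Illustrative experimental settings; all values are placeholders.
settings = {
    "datasource": "data/init_network.json",  # hypothetical path to the initial network
    "num_epoch": 100,                        # number of epochs to simulate
    "starting_epoch": 0,                     # 0 starts a fresh simulation
    "side_init": [1/3, 1/3, 1/3],            # even negative / neutral / positive split
    "abb": "results/run_01",                 # hypothetical output path
}
```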

Expected time

For a network with 1000 agents and 4000 relationships, a full simulation takes approximately 6-7 hours using gpt-3.5-turbo with a Tier 5 OpenAI API account.

Instruction for use

Run the demo with the command python run.py

License

This project is licensed under the MIT License - see the LICENSE file for details.
