- 📚 The Unreasonable Effectiveness of RNNs
- 🎨 Four Experiments in Handwriting with a Neural Network (Drawing)
- 📖 10 things artificial intelligence did in 2018 by Janelle Shane (Text)
- 📖 Writing with the Machine
> among the reasons I use large pre-trained language models sparingly in my computer-generated poetry practice is that being able to know whose voices I'm speaking with is... actually important, as is understanding how the output came to have its shape - @aparrish, full thread
- 📚 Watch an A.I. Learn to Write by Reading Nothing but ____ by Aatish Bhatia
- 📚 Attention is All You Need, the original "Transformer" paper from 2017, and Neural Machine Translation by Jointly Learning to Align and Translate, the attention paper from 2014
- 📚 What Are Transformer Models and How Do They Work?
- 🎥 How large language models work, a visual intro to transformers by 3Blue1Brown
- 🎥 Intro to Large Language Models by Andrej Karpathy and Intro to LLMs slides
- 📖 Language Models Can Only Write Ransom Notes by Allison Parrish
- 🦙 LLaMA: Open and Efficient Foundation Language Models
- 🦙 The Llama 3 Herd of Models
- 🦜 On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜
- 🔍 The Foundation Model Transparency Index
- 📖 Generative AI’s Illusory Case for Fair Use by Jacqueline Charlesworth
- 🔢 Common Crawl
- 🔢 The Pile
- 🎥 Workflow: Terminal, Shell, Node.js, VSCode
- 🎥 How to Set Up a Node.js Project
- 🎥 Server Side / Express with node.js
- 🎥 HTTP "POST" request with fetch
- 💻 Hello World node.js + express + p5 example (a minimal server + POST sketch follows this list)
- 🎨 Hello World p5.js + Replicate web app
- ⌨️ Streaming results from Replicate model to p5.js
- 💬 ChatBot Conversations with Llama via Replicate. This follows the specification in the Llama 3 Model Card (a prompt-format sketch follows this list).
- 💻 Transformers.js LLM examples (a browser-side sketch follows this list)
- WebAI Summit Transformers.js Slides - Thank you @xenova!
- 📚 Transformers.js Documentation
- 📰 Transformers.js v3: WebGPU Support, New Models & Tasks, and More…
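The server-side and POST examples above share one pattern: a small Express server keeps API keys and model calls off the client, and the p5.js sketch sends prompts to it with a POST request. Below is a minimal sketch of that pattern, not the course's example code; the `/api/generate` route name and the echoed response are placeholders to swap for a real call to Replicate or another hosted model.

```js
// server.js: a minimal sketch, assumes "type": "module" in package.json
import express from 'express';

const app = express();
app.use(express.static('public')); // serve the p5.js sketch from /public
app.use(express.json());           // parse JSON request bodies

// Hypothetical route name; swap the echo below for a call to Replicate or another hosted model.
app.post('/api/generate', (req, res) => {
  const { prompt } = req.body;
  res.json({ text: `You said: ${prompt}` });
});

app.listen(3000, () => console.log('listening on http://localhost:3000'));
```

```js
// public/sketch.js: sending the prompt from the p5.js sketch with a POST request
async function generate(prompt) {
  const response = await fetch('/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  const { text } = await response.json();
  return text;
}
```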
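The ChatBot example ultimately hands Llama one long string, so each turn of the conversation has to be folded into the prompt using the special tokens described in the Llama 3 Model Card. Here is a minimal sketch of that assembly; `buildLlama3Prompt` is a hypothetical helper, and hosted APIs such as Replicate may apply part of this template for you, so check the card for the model version you are calling.

```js
// A sketch of the Llama 3 chat prompt format (see the Llama 3 Model Card for the
// authoritative template); buildLlama3Prompt is a hypothetical helper, not course code.
function buildLlama3Prompt(systemPrompt, messages) {
  let prompt =
    `<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n${systemPrompt}<|eot_id|>`;
  for (const { role, content } of messages) {
    // role is "user" or "assistant"; each turn ends with <|eot_id|>
    prompt += `<|start_header_id|>${role}<|end_header_id|>\n\n${content}<|eot_id|>`;
  }
  // Leave an open assistant header so the model writes the next reply.
  prompt += `<|start_header_id|>assistant<|end_header_id|>\n\n`;
  return prompt;
}

const prompt = buildLlama3Prompt('You are a helpful poetry tutor.', [
  { role: 'user', content: 'Suggest a first line for a poem about the subway.' },
]);
```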
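Transformers.js runs the model in the browser (or in Node.js) rather than on a server. A minimal sketch follows, assuming the v3 package `@huggingface/transformers`; the model id below is only a placeholder, so substitute a small text-generation model from the examples or documentation above.

```js
// A sketch of in-browser text generation with Transformers.js v3.
import { pipeline } from '@huggingface/transformers';

// Placeholder model id: substitute a small text-generation model from the docs/examples.
const MODEL_ID = 'onnx-community/Llama-3.2-1B-Instruct';

// Downloads and caches the model weights on first run.
const generator = await pipeline('text-generation', MODEL_ID);

const output = await generator('Once upon a time in Brooklyn,', { max_new_tokens: 60 });
console.log(output[0].generated_text); // prompt plus the generated continuation
```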
- Read Language Models Can Only Write Ransom Notes by Allison Parrish and review The Foundation Model Transparency Index. What questions arise for you about using LLMs in your work at ITP?
- Experiment with prompting a large language model in some way other than a provided interface (e.g. ChatGPT) and document the results in a blog post. Consider how working with an LLM compares to generating text with other methods, including but not limited to Markov chains and context-free grammars. Here are some options:
- Run any of the code examples above and try adjusting the prompts, interaction, or visual design.
- Sign up for the OpenAI API and try creating a "custom assistant" with a system prompt and knowledge base.
- Try running Llama locally with Ollama and compare and contrast different models (see the API sketch after this list).
- Can you connect an LLM to a Discord Bot!?!
- Invent your own idea!
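One way to take the Ollama option beyond the command line is to call Ollama's local HTTP API, which listens on http://localhost:11434 by default. A minimal sketch, assuming Ollama is running and a model has already been pulled (the model name `llama3.2` is only an example):

```js
// A sketch of calling a locally running Ollama server.
// Requires Node 18+ (global fetch) in an ES module, and `ollama pull llama3.2` beforehand.
async function askOllama(prompt) {
  const response = await fetch('http://localhost:11434/api/generate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'llama3.2', // swap for whichever model you pulled
      prompt,
      stream: false,     // return one JSON object instead of a token stream
    }),
  });
  const data = await response.json();
  return data.response; // the generated text
}

console.log(await askOllama('Describe Washington Square Park in one sentence.'));
```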
(Please note you are welcome to post under a pseudonym and/or password-protect your published assignment. For NYU blogs, privacy options are covered in the NYU Wordpress Knowledge Base. Finally, if you prefer not to post your assignment here at all, you may email your submission instead.)
- Joyce Local Ollama try-out
- Michal lora-trainer
- Sean MoMA Guide LLM Discord Bot
- Caroline Rethinking Hate Comments
- Cara Koala again
- Wallis LLaVA Vision test