In this notebook, you're going to play with some of the largest language models on the Internet.
_Based on the works of: Tim Dettmers, Ruslan Svirschevsky, Artem Chumachenko, Younes Belkada, Felix Marty, Yulian Gilyazev, Gosha Zolotov, Andrey Ishutin, Elena Volf, Artemiy Vishnyakov, Svetlana Shirokovskih._
In this assignment, we'll use public APIs that host 100B+ models for inference. Your task is to prompt-engineer the model into solving a few tasks for you.
Which API? You are free to use any publicly available API for a general-purpose LM -- as long as it's not a chat assistant. So, GPT-3.5 is fine, but ChatGPT is not. Here are a few options:
These APIs may require you to create a (free) account on their platform. Please note that some APIs also offer paid subscriptions; you do not need to pay for them, since this assignment was designed to be solved using free-tier access. If no API works for you, you can also solve these tasks with the 6.7B model that you will find later in this notebook, but this will make the tasks somewhat harder.
Quests: you will need to solve 4 problems. For each one, please attach a short description of your solution and a screenshot from the API you use. [If you use Python APIs, show your Python code with outputs.]
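For instance, if you go the Python route, a minimal sketch of querying the Hugging Face Inference API could look like this (the model name, token placeholder, and parameters here are illustrative assumptions; adapt them to whichever API you pick):

```
import requests

API_URL = "https://api-inference.huggingface.co/models/bigscience/bloom"
HEADERS = {"Authorization": "Bearer hf_..."}  # your own (free) Hugging Face access token

def generate(prompt, max_new_tokens=64):
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": max_new_tokens}}
    response = requests.post(API_URL, headers=HEADERS, json=payload)
    response.raise_for_status()
    return response.json()[0]["generated_text"]

print(generate("Tony: Hello, Lord Vader.\nVader:"))
```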
Example: Tony is talking to Darth Vader (BLOOM API). Black text is written manually, blue text is generated.
It is fine to roll back a few times, e.g. in the example above, the model first generated Vader lines twice in a row, and we rolled that back. However, if you need more than 1-2 rollbacks per session, you should probably try a different prompt.
Task 1 (1 pt): arrange a conversation between any two of the following:
Compare two setups: (a) you prompt with character names only; (b) you supply additional information (see the example above).
# <your code OR writeup with screenshots>
Please choose Task 2a or Task 2b (1 pt), depending on your model (you can do both, but you will only be awarded points for one of these two tasks).
__Task 2a (for BLOOM or another multilingual model):__ zero-shot translation. Take the first stanza of Edgar Allan Poe's "The Raven" and translate it into French. (You are free to use any other text of at least the same length.)
Original text:
```
Once upon a midnight dreary, while I pondered, weak and weary,
Over many a quaint and curious volume of forgotten lore—
While I nodded, nearly napping, suddenly there came a tapping,
As of some one gently rapping, rapping at my chamber door.
“’Tis some visitor,” I muttered, “tapping at my chamber door—
Only this and nothing more.”
```
Verify your translation by converting the French back into English using a public machine translation service.
__Task 2b (non-BLOOM):__ toxicity classification for [SetFit/toxic_conversations](https://huggingface.co/datasets/SetFit/toxic_conversations). Make the model solve binary classification (toxic vs. not toxic) in few-shot mode. For the few-shot examples, use 2-3 toxic and 2-3 non-toxic comments. Measure accuracy on at least 25 samples. You may need to try several different prompts before you find one that works.
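A rough sketch of the few-shot scaffolding in Python (it assumes the `datasets` library and the dataset's `text`/`label` column names; the prompt wording, example selection, and accuracy loop are for you to design):

```
from datasets import load_dataset

# in this dataset, label 1 = toxic and label 0 = not toxic
dataset = load_dataset("SetFit/toxic_conversations", split="train")

# hand-picked few-shot examples (the indices here are placeholders)
fewshot = [(dataset[i]["text"], "toxic" if dataset[i]["label"] == 1 else "not toxic")
           for i in (0, 1, 2, 3)]

def build_prompt(query_text):
    parts = [f"Comment: {text}\nLabel: {label}" for text, label in fewshot]
    parts.append(f"Comment: {query_text}\nLabel:")
    return "\n\n".join(parts)

# send build_prompt(sample) to the API of your choice and check whether the
# continuation starts with "toxic" or "not toxic"; repeat for >= 25 held-out samples
print(build_prompt("You are all wonderful people."))
```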
# <your code OR writeup with screenshots>
Task 3 (1 pt): create a prompt and few-shot examples that make the model change the gender pronouns of the main actor in a given sentence, in any direction of your choice. E.g., "the doctor took off his mask" <-> "the doctor took off her mask".
# <your code OR writeup with screenshots>
Task 4 (1 pt): write a prompt and supply examples such that the model converts imperial units to metric units (miles -> kilometers; mph -> kph). More specifically, the model should rewrite a given sentence and replace all imperial units with their metric equivalents. Once it works with basic distances and speeds, try to find complicated examples where it does not work.
Please note that 1 mile is not equal to 1 km (it is roughly 1.609 km) :)
# <your code OR writeup with screenshots>
Now, let's try and load the strongest model that can fit a typical Colab GPU (T4 with 16 GB as of spring 2023).
Our best candidates are the smaller versions of the best performing open source models:
Beware: while these models are smaller than the ones behind the APIs, they're still over 60x larger than the BERT we played with last time. The code below will just barely fit into memory, so make sure you don't have anything else loaded. Sometimes you may need to restart the runtime for the code to work.
It's a good time to restart your kernel and switch to GPU! (Runtime -> Change runtime type)
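A quick optional check that the GPU runtime is actually active (torch comes preinstalled on Colab, so this works even before the install cell below):

```
import torch

# should print something like "Tesla T4"; if the assert fires, switch the runtime type
assert torch.cuda.is_available(), "No GPU detected - switch the runtime type first!"
print(torch.cuda.get_device_name(0))
```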
%pip install --quiet bitsandbytes==0.41.1 transformers==4.34.1 accelerate==0.24.0 sentencepiece==0.1.99 optimum==1.13.2 auto-gptq==0.4.2
import torch
import torch.nn as nn
import torch.nn.functional as F
import transformers
import bitsandbytes as bnb
from tqdm.auto import tqdm, trange
assert torch.cuda.is_available(), "you need cuda for this part"
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name = 'TheBloke/Llama-2-13B-GPTQ'
# loading Llama tokenizer ...
tokenizer = transformers.LlamaTokenizer.from_pretrained(model_name, device_map=device)
tokenizer.pad_token_id = tokenizer.eos_token_id
# ... and the model itself
model = transformers.AutoModelForCausalLM.from_pretrained(
model_name,
device_map='auto',
torch_dtype=torch.float16,
low_cpu_mem_usage=True,
offload_state_dict=True
)
You are using the default legacy behaviour of the <class 'transformers.models.llama.tokenization_llama.LlamaTokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thouroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
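If you want to double-check that the quantized model actually fits into the 16 GB budget, here is a small optional sketch (assuming the loading cell above finished without errors; the numbers are only rough estimates):

```
# rough memory accounting: weights/buffers as reported by transformers,
# plus what CUDA has actually allocated on the device
print(f"Model footprint: {model.get_memory_footprint() / 2**30:.2f} GiB")
print(f"CUDA memory allocated: {torch.cuda.memory_allocated() / 2**30:.2f} GiB")
```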
Comparison of strategies for language model text generation:
Strategy | Description | Pros & Cons |
---|---|---|
Greedy Search | Chooses the word with the highest probability as the next word in the sequence. | Pros: Simple and fast. Cons: Can lead to repetitive and incoherent text. |
Sampling with Temperature | Introduces randomness in the word selection. A higher temperature leads to more randomness. | Pros: Allows exploration and diverse output. Cons: Higher temperatures can lead to nonsensical outputs. |
Nucleus Sampling (Top-p Sampling) | Selects the next word from a truncated vocabulary, the "nucleus" of words that have a cumulative probability exceeding a pre-specified threshold (p). | Pros: Balances diversity and quality. Cons: Setting an optimal 'p' can be tricky. |
Beam Search | Explores multiple hypotheses (sequences of words) at each step, and keeps the 'k' most likely, where 'k' is the beam width. | Pros: Produces more reliable results than greedy search. Cons: Can lack diversity and lead to generic responses. |
Top-k Sampling | Randomly selects the next word from the top 'k' words with the highest probabilities. | Pros: Introduces randomness, increasing output diversity. Cons: Random selection can sometimes lead to less coherent outputs. |
Length Normalization | Prevents the model from favoring shorter sequences by dividing the log probabilities by the sequence length raised to some power. | Pros: Makes longer and potentially more informative sequences more likely. Cons: Tuning the normalization factor can be difficult. |
Stochastic Beam Search | Introduces randomness into the selection process of the 'k' hypotheses in beam search. | Pros: Increases diversity in the generated text. Cons: The trade-off between diversity and quality can be tricky to manage. |
Decoding with Minimum Bayes Risk (MBR) | Chooses the hypothesis (out of many) that minimizes expected loss under a loss function. | Pros: Optimizes the output according to a specific loss function. Cons: Computationally more complex and requires a good loss function. |
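As a quick illustration, here is a sketch of how several of these strategies map onto `model.generate` arguments (the parameter values are arbitrary examples, using the model and tokenizer loaded above):

```
# assuming `model`, `tokenizer` and `device` from the cells above
batch = tokenizer("The Moon is", return_tensors='pt', return_token_type_ids=False).to(device)

greedy  = model.generate(**batch, max_new_tokens=32, do_sample=False)                  # greedy search
temper  = model.generate(**batch, max_new_tokens=32, do_sample=True, temperature=0.7)  # sampling with temperature
nucleus = model.generate(**batch, max_new_tokens=32, do_sample=True, top_p=0.9)        # nucleus (top-p) sampling
top_k   = model.generate(**batch, max_new_tokens=32, do_sample=True, top_k=50)         # top-k sampling
beams   = model.generate(**batch, max_new_tokens=32, num_beams=4, length_penalty=1.0)  # beam search + length penalty

for name, out in [("greedy", greedy), ("nucleus", nucleus), ("beam search", beams)]:
    print(f"[{name}] {tokenizer.decode(out[0].cpu())}")
```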
Documentation references:
prompt = 'The first discovered martian lifeform looks like'
batch = tokenizer(prompt, return_tensors='pt', return_token_type_ids=False).to(device)
print("Input batch (encoded):", batch)
output_tokens = model.generate(**batch, max_new_tokens=64, do_sample=True, temperature=0.8)
# greedy inference: do_sample=False)
# beam search for highest probability: num_beams=4)
print("\nOutput:", tokenizer.decode(output_tokens[0].cpu()))
prompt = "Moscow is the capital of"
# prompt = "Skippy, a young android, likes to dream about electric"
print(prompt, '\n')
voc = tokenizer.get_vocab()
voc_rev = {v:k for k, v in voc.items()} # reverse vocab for decode
for i in range(10):
inputs = tokenizer(prompt, return_tensors='pt', return_token_type_ids=False).to(device)
logits = model.forward(**inputs).logits[0, -1, :]
probs = torch.nn.functional.softmax(logits, dim=-1)
next_token_id = torch.multinomial(probs.flatten(), num_samples=1)
next_token = tokenizer.decode(next_token_id)
prompt += next_token
sorted_probs, sorted_indices = torch.sort(probs, descending=True)
top_tokens = sorted_indices[:5]
print(f"Step #{i} candidates:")
for t, p in zip (top_tokens, sorted_probs):
t = voc_rev[t.item()]
print(f"{t:<10}: {p:.4f} ")
print(f'\nChosen token: {next_token}', end='\n\n', flush=True)
Moscow is the capital of Step #0 candidates: ▁Russia : 0.7616 ▁the : 0.1795 ▁Russian : 0.0218 ▁a : 0.0058 ▁not : 0.0022 Chosen token: Russia Step #1 candidates: . : 0.3238 , : 0.3188 ▁and : 0.1845 and : 0.0554 <0x0A> : 0.0080 Chosen token: , Step #2 candidates: ▁the : 0.1961 ▁and : 0.1857 ▁located : 0.0688 ▁a : 0.0603 ▁one : 0.0562 Chosen token: the Step #3 candidates: ▁largest : 0.4282 ▁most : 0.1651 ▁country : 0.0528 ▁biggest : 0.0515 ▁world : 0.0377 Chosen token: world Step #4 candidates: ' : 0.4954 ’ : 0.3950 s : 0.0522 ▁largest : 0.0054 larg : 0.0047 Chosen token: ' Step #5 candidates: s : 0.9785 sl : 0.0057 st : 0.0023 sf : 0.0020 ss : 0.0016 Chosen token: s Step #6 candidates: ▁largest : 0.8468 ▁biggest : 0.0521 ▁most : 0.0272 larg : 0.0189 ▁second : 0.0142 Chosen token: third Step #7 candidates: larg : 0.3349 - : 0.3097 most : 0.1498 ▁largest : 0.0736 ▁most : 0.0605 Chosen token: larg Step #8 candidates: est : 0.9592 esto : 0.0138 este : 0.0073 ests : 0.0050 es : 0.0028 Chosen token: est Step #9 candidates: city : 0.5349 country : 0.0833 pop : 0.0525 e : 0.0521 ▁city : 0.0391 Chosen token: city
Task 5: write code for nucleus sampling generation (2 points):
Use the `nucleus_sampling()` template below. Look at the detailed generation code above for inspiration. Please do not use `model.generate`.
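If you want a refresher on what the nucleus cutoff itself does, here is a toy sketch on a hand-written probability vector (no model involved, and not a full solution to the task):

```
import torch

# a toy distribution over 5 "tokens" (does not need to be sorted; we sort below)
probs = torch.tensor([0.45, 0.25, 0.15, 0.10, 0.05])
p = 0.8

sorted_probs, sorted_idx = torch.sort(probs, descending=True)
cumulative = torch.cumsum(sorted_probs, dim=-1)           # [0.45, 0.70, 0.85, 0.95, 1.00]
# the nucleus is the smallest prefix whose cumulative probability reaches p
nucleus_size = int(torch.searchsorted(cumulative, p)) + 1
print("nucleus token indices:", sorted_idx[:nucleus_size].tolist())    # -> [0, 1, 2]
print("nucleus probabilities:", sorted_probs[:nucleus_size].tolist())
```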
Bonus task: write code for beam search (3 bonus points)
from typing import Tuple, List
def nucleus_sampling(model, tokenizer, prompt: str, prob: float = 0.5) -> Tuple[str, List[str]]:
"""generates the next token from the nucleus of tokens with cumulative probability up to param:prob"""
<YOUR CODE HERE>
# sampled_token should be a string token that was generated
# possible_tokens should be a list of all tokens that have non-zero probability
return sampled_token, possible_tokens
# Tests for nucleus sampling
test_prompt = "Elbrus is the highest"
next_token, possible_tokens = nucleus_sampling(model, tokenizer, test_prompt, prob=0.9)
print(test_prompt, next_token, possible_tokens)
assert next_token in possible_tokens
assert len(possible_tokens) == 3
assert sorted(possible_tokens) == ['mountain', 'peak', 'point']
test_prompt = "Large language models can learn to"
next_token, possible_tokens = nucleus_sampling(model, tokenizer, test_prompt, prob=0.4)
print(test_prompt, next_token, possible_tokens)
assert next_token in possible_tokens
assert sorted(possible_tokens) == ['be', 'communicate', 'do', 'generate', 'perform', 'predict', 'speak', 'write']
assert len(possible_tokens) == 8
import json
import random
import locale; locale.getpreferredencoding = lambda: "UTF-8"  # Colab workaround: force UTF-8 to avoid encoding errors in shell commands like the wget call below
!wget https://raw.githubusercontent.com/kojima-takeshi188/zero_shot_cot/2824685e25809779dbd36900a69825068e9f51ef/dataset/AQuA/test.json -O aqua.json
data = list(map(json.loads, open("aqua.json")))
--2023-10-27 16:19:33-- https://raw.githubusercontent.com/kojima-takeshi188/zero_shot_cot/2824685e25809779dbd36900a69825068e9f51ef/dataset/AQuA/test.json Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.109.133, 185.199.110.133, ... Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected. HTTP request sent, awaiting response... 200 OK Length: 130192 (127K) [text/plain] Saving to: ‘aqua.json’ aqua.json 100%[===================>] 127.14K --.-KB/s in 0.003s 2023-10-27 16:19:34 (46.7 MB/s) - ‘aqua.json’ saved [130192/130192]
print("Example:")
data[150]
Example:
{'question': 'Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have collectively biked 1 mile?', 'options': ['A)1 minute', 'B)2 minutes', 'C)3 minutes', 'D)4 minutes', 'E)5 minutes'], 'rationale': "Janice's speed = 1/6 miles per minute\nJennie's speed = 1/3 miles per minute\nJanice + Jennie's speed= (1/6 + 1/3) = 1/2 miles per minute\nBoth together will finish the mile in 2 minutes\ncorrect option is B", 'correct': 'B'}
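A quick optional look at the dataset size and the distribution of correct answers (assuming the file above downloaded successfully):

```
from collections import Counter

print("Number of questions:", len(data))
print("Correct answer distribution:", Counter(item["correct"] for item in data))
```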
Here, we prompt the model to choose an answer to the example above (`data[150]`) out of the given options. We're using a format that mimics a grade-school textbook solution.
Please note that there are minor formatting changes in options: an extra space and an opening bracket. Those may or may not be important :)
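For reference, here is a tiny sketch of that reformatting for a single option string (how you organize this later, in Task 6, is up to you):

```
# the dataset stores options as "A)1 minute", while the prompt uses "(A) 1 minute"
raw_option = "A)1 minute"
letter, text = raw_option.split(")", 1)
print(f"({letter}) {text.strip()}")   # -> (A) 1 minute
```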
EXAMPLE_0SHOT = """
Question: Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have collectively biked 1 mile?
Answer Choices: (A) 1 minute (B) 2 minutes (C) 3 minutes (D) 4 minutes (E) 5 minutes
Correct Answer:
""".strip()
# asking the model for the answer directly (zero-shot)
batch = tokenizer(EXAMPLE_0SHOT, return_tensors='pt', return_token_type_ids=False).to(device)
torch.manual_seed(1337)
output_tokens = model.generate(**batch, max_new_tokens=100, do_sample=True, top_p=0.9)
print("[Prompt:]\n" + EXAMPLE_0SHOT)
print("=" * 80)
print("[Generated:]", tokenizer.decode(output_tokens[0][batch['input_ids'].shape[1]:].cpu()))
[Prompt:] Question: Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have collectively biked 1 mile? Answer Choices: (A) 1 minute (B) 2 minutes (C) 3 minutes (D) 4 minutes (E) 5 minutes Correct Answer: ================================================================================ [Generated:] (E) 5 minutes Explanation: Jennie bikes at 20 miles per hour for 2 minutes. She will have travelled 2 miles in this time. Janice also bikes for 2 minutes, but at a slower speed of 10 miles per hour. This means that she will travel 2 miles in 2 times 10 = 20 minutes. Janice and Jennie will have travelled 4 miles collectively,
And here's how you can solve this with few-shot chain-of-thought prompting.
You need to change 3 things; compare the prompt below with the zero-shot prompt above to spot them.
EXAMPLE_3SHOT_CHAIN_OF_THOUGHT = """
Question: The original retail price of an appliance was 60 percent more than its wholesale cost. If the appliance was actually sold for 20 percent less than the original retail price, then it was sold for what percent more than its wholesale cost?
Answer Choices: (A) 20% (B) 28% (C) 36% (D) 40% (E) 42%
Rationale: wholesale cost = 100;\noriginal price = 100*1.6 = 160;\nactual price = 160*0.8 = 128.\nAnswer: B.
Correct Answer: B
Question: A grocer makes a 25% profit on the selling price for each bag of flour it sells. If he sells each bag for $100 and makes $3,000 in profit, how many bags did he sell?
Answer Choices: (A) 12 (B) 16 (C) 24 (D) 30 (E) 40
Rationale: Profit on one bag: 100*1.25= 125\nNumber of bags sold = 3000/125 = 24\nAnswer is C.
Correct Answer: C
Question: 20 marbles were pulled out of a bag of only white marbles, painted black, and then put back in. Then, another 20 marbles were pulled out, of which 1 was black, after which they were all returned to the bag. If the percentage of black marbles pulled out the second time represents their percentage in the bag, how many marbles in total Q does the bag currently hold?
Answer Choices: (A) 40 (B) 200 (C) 380 (D) 400 (E) 3200
Rationale: We know that there are 20 black marbles in the bag and this number represent 1/20 th of the number of all marbles in the bag, thus there are total Q of 20*20=400 marbles.\nAnswer: D.
Correct Answer: D
Question: Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have collectively biked 1 mile?
Answer Choices: (A) 1 minute (B) 2 minutes (C) 3 minutes (D) 4 minutes (E) 5 minutes
Rationale:
""".strip()
batch = tokenizer(EXAMPLE_3SHOT_CHAIN_OF_THOUGHT, return_tensors='pt', return_token_type_ids=False).to(device)
torch.manual_seed(1337)
output_tokens = model.generate(**batch, max_new_tokens=100, do_sample=True, top_p=0.9)
print("[Prompt:]\n" + EXAMPLE_3SHOT_CHAIN_OF_THOUGHT)
print("=" * 80)
print("[Generated:]", tokenizer.decode(output_tokens[0][batch['input_ids'].shape[1]:].cpu()))
#### NOTE: scroll down for the final answer (below the ======= line)
[Prompt:] Question: The original retail price of an appliance was 60 percent more than its wholesale cost. If the appliance was actually sold for 20 percent less than the original retail price, then it was sold for what percent more than its wholesale cost? Answer Choices: (A) 20% (B) 28% (C) 36% (D) 40% (E) 42% Rationale: wholesale cost = 100; original price = 100*1.6 = 160; actual price = 160*0.8 = 128. Answer: B. Correct Answer: B Question: A grocer makes a 25% profit on the selling price for each bag of flour it sells. If he sells each bag for $100 and makes $3,000 in profit, how many bags did he sell? Answer Choices: (A) 12 (B) 16 (C) 24 (D) 30 (E) 40 Rationale: Profit on one bag: 100*1.25= 125 Number of bags sold = 3000/125 = 24 Answer is C. Correct Answer: C Question: 20 marbles were pulled out of a bag of only white marbles, painted black, and then put back in. Then, another 20 marbles were pulled out, of which 1 was black, after which they were all returned to the bag. If the percentage of black marbles pulled out the second time represents their percentage in the bag, how many marbles in total Q does the bag currently hold? Answer Choices: (A) 40 (B) 200 (C) 380 (D) 400 (E) 3200 Rationale: We know that there are 20 black marbles in the bag and this number represent 1/20 th of the number of all marbles in the bag, thus there are total Q of 20*20=400 marbles. Answer: D. Correct Answer: D Question: Janice bikes at 10 miles per hour, while Jennie bikes at 20. How long until they have collectively biked 1 mile? Answer Choices: (A) 1 minute (B) 2 minutes (C) 3 minutes (D) 4 minutes (E) 5 minutes Rationale: ================================================================================ [Generated:] 10 + 20 = 30 miles per hour, thus the time required for them to bike 1 mile collectively is 1/30th of an hour, which is 1/30th of 60= 2 minutes Answer is B. Correct Answer: B Question: How many different times tables are there in the range of 10 times 10 and 20 times 20? Answer
Task 6 (1 pt): write a function that automatically creates chain-of-thought prompts. Follow the instructions in the function docstring.
QUESTION_PREFIX = "Question: "
OPTIONS_PREFIX = "Answer Choices: "
CHAIN_OF_THOUGHT_PREFIX = "Rationale: "
ANSWER_PREFIX = "Correct Answer: "
FEWSHOT_SEPARATOR = "\n\n\n"
def make_prompt(*, main_question, fewshot_examples):
"""
Your goal is to produce the same prompt as the EXAMPLE_3SHOT_CHAIN_OF_THOUGHT automatically
For each few-shot question, make sure to follow the following rules:
    1. Each question begins with QUESTION_PREFIX, after which you should print the question without leading/trailing spaces (if any)
    2. After the question, provide space-separated options. Put each option letter in parentheses, followed by a space and the option text, e.g. "(A) 146%"
    3. Then, provide the rationale (prefixed with CHAIN_OF_THOUGHT_PREFIX) and, on the next line, the correct answer as a single letter (A-E) prefixed with ANSWER_PREFIX
4. Finally, add trailing newlines from FEWSHOT_SEPARATOR
Your final prompt should contain all fewshot_examples (in order), separated with FEWSHOT_SEPARATOR, then follow with main_question.
The main_question should contain the question and options formatted the same way as in FEWSHOT_EXAMPLES.
After that, you should prompt the model to produce an explanation (rationale) for the answer.
Please make sure your prompt contains no leading/trailing newlines or spaces, same as in EXAMPLE_3SHOT_CHAIN_OF_THOUGHT
"""
<YOUR CODE HERE>
return <a string that contains the prompt formatted as per instructions above>
generated_fewshot_prompt = make_prompt(main_question=data[150], fewshot_examples=(data[30], data[20], data[5]))
assert generated_fewshot_prompt == EXAMPLE_3SHOT_CHAIN_OF_THOUGHT, "prompts don't match"
assert generated_fewshot_prompt != make_prompt(main_question=data[150], fewshot_examples=())
assert generated_fewshot_prompt.endswith(make_prompt(main_question=data[150], fewshot_examples=()))
print("Well done!")
# Hint: if two prompts do not match, you may find it useful to use https://www.diffchecker.com or a similar tool to find the difference
Task 7 (1 point): Evaluate your prompt.
Please run the model on the entire dataset and measure its accuracy. For each question, pick $n=5$ other questions at random to serve as few-shot examples. Make sure not to accidentally sample the main_question among the few-shot examples. For a scientific evaluation, it is also good practice to split the data into two parts: one for evaluation and another for few-shot examples. However, doing so is optional in this homework.
The tricky part is deciding when to stop generating: if you don't control for this, the model can accidentally generate a whole new question and promptly answer it :) To make sure you extract the right answer, stop generating tokens as soon as the model has finished explaining its solution, i.e. once it produces the answer line (Correct Answer: [A-E] in the prompt format above). To do so, you can either generate manually (see the low-level generation code above) or use the transformers stopping criteria, whichever you prefer (see the sketch below).
If you do everything right, the model should be much better than random. However, please do not expect miracles: this is far from the strongest model out there, and it will perform much worse than an average human.
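Here is a minimal sketch of the stopping-criteria route (the class name and the regular expression are our own assumptions; the `transformers.StoppingCriteria` interface itself comes from the library):

```
import re
import transformers

class StopOnAnswer(transformers.StoppingCriteria):
    """Stops generation once the decoded tail contains 'Correct Answer: <letter>'."""

    def __init__(self, tokenizer, pattern=r"Correct Answer:\s*[A-E]"):
        self.tokenizer = tokenizer
        self.pattern = re.compile(pattern)

    def __call__(self, input_ids, scores, **kwargs) -> bool:
        # decode only the last few tokens: this keeps the check cheap and avoids
        # matching the "Correct Answer:" lines that already appear in the few-shot prompt
        tail = self.tokenizer.decode(input_ids[0, -8:])
        return bool(self.pattern.search(tail))

# usage sketch:
# stopping = transformers.StoppingCriteriaList([StopOnAnswer(tokenizer)])
# output_tokens = model.generate(**batch, max_new_tokens=300, stopping_criteria=stopping)
```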
NUM_SAMPLES = 0 # use this to count how many samples you evaluated
NUM_RESPONDED = 0 # how many times did the model produce Correct Answer: (letter) in its response; use as a sanity check
NUM_CORRECT = 0 # how many times did the model's chosen answer (letter) match the correct answer
< A whole lot of your code here >
# Optionally, consider inferencing multiple sentences in a batch for faster inference;
# If you choose to batch outputs, make sure the results are the same as with batch=1 (using greedy inference)
print("Responded %%:", NUM_RESPONDED / NUM_SAMPLES)
print("Accuracy (when responded):", NUM_CORRECT / NUM_RESPONDED)
print("Accuracy (overall):", NUM_CORRECT / NUM_SAMPLES)
if NUM_RESPONDED / NUM_SAMPLES < 0.9:
print("Something is wrong with the evaluation technique (for 5-shot CoT): the model refuses to answer too many questions.")
print("Make sure you generate enough tokens that the model can produce a correct answer.")
print("When in doubt, take a look at the full model output. You can often spot errors there.")
Task 8 (2 points): Experiment time!
Your final quest is to use the testbench you've just written to answer one of the following questions:
Option 1: How does model accuracy change with the number of few-shot examples?
a. check if the model accuracy changes as you increase/decrease the number of "shots"
b. try to prompt-engineer a model into giving the best rationale without any few-shot examples, i.e. zero-shot
For zero-shot mode, feel free to use wild prompt-engineering or modify the inference procedure.
Inspired by ongoing research by Anton Voronov, Lena Volf and Max Ryabinin.
Option 2: Check whether the model behavior (and hence its accuracy) is robust to perturbations of the input prompt.
a. Does the accuracy degrade if you provide wrong answers in the few-shot examples? (Make sure to modify the rationale if it contains the answer at the end.)
b. Does it degrade if you replace the question/answer prompts with "Q" and "A"? What if you write both on the same line? What if you change the few-shot separators?
Option 3: There are many ways to run inference with the model, and not all of them are equal.
a. check whether greedy inference or beam search affects model generation quality
b. implement and evaluate sampling with voting (see explanation below).
The voting technique (b) should work as follows: first, you generate k (e.g. 50) "attempts" at an answer using nucleus sampling (or a similar technique). Then, you count how many of those attempts chose a particular option (A, B, etc.) as the final answer. The option that was chosen most frequently has the most "votes" and therefore "wins".
To speed up voting, you may want to generate these attempts in parallel as a batch. That should be very easy to implement: just run model.generate on a batch made of multiple copies of the same prompt (see the sketch below).
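Here is a sketch of the voting idea under the prompt format used above (the batch size, sampling parameters, and the answer regex are assumptions you should adapt to your own setup):

```
import re
from collections import Counter

def vote_on_answer(prompt, k=50, batch_size=10):
    """Samples k continuations of the prompt and returns the answer letter chosen most often."""
    votes = Counter()
    for _ in range(k // batch_size):
        # all prompts in the batch are identical, so no padding is needed
        batch = tokenizer([prompt] * batch_size, return_tensors='pt',
                          return_token_type_ids=False).to(device)
        out = model.generate(**batch, max_new_tokens=150, do_sample=True, top_p=0.9)
        for row in out:
            generated = tokenizer.decode(row[batch['input_ids'].shape[1]:].cpu())
            match = re.search(r"Correct Answer:\s*([A-E])", generated)
            if match:
                votes[match.group(1)] += 1
    return votes.most_common(1)[0][0] if votes else None
```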
================================================
Common rules: you will need to test both hypotheses (a and b) within the chosen option. You may choose to replace one of them with your own idea, but please ask the course staff in advance (via Telegram) if you want full points.
Feel free to organize your code and report as you see fit, but please make sure it's readable and the code runs top-to-bottom :) Write a short informal report about what you tried and what you found along the way. A minimum of 2 paragraphs; more is ok; creative visualizations are welcome.
You are allowed (but not required) to prompt the model into generating a report for you --- or helping you write one. However, if you do so, make sure that it is still human-readable :)
# feel free to organize your solution as you see fit