What are some different LLM parameters?
Temperature: controls the randomness/creativity of the output. For classification tasks we want it low (e.g., 0.1). A low temperature makes the model more focused, which is good for factual tasks; a high temperature makes it more creative, which can help brainstorming but can also be less accurate.
Top-p: controls the range of tokens the model considers. A lower value restricts sampling to the most likely tokens, making the output more predictable.
Max_tokens: limits the length of the output.
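A minimal sketch of how these parameters are passed to a generation call, assuming an OpenAI-style Chat Completions client; the model name and prompt are placeholders, not part of the original notes.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Low temperature for a classification-style task: focused, repeatable output
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "user",
         "content": "Classify this review as positive or negative: 'Great battery life.'"}
    ],
    temperature=0.1,  # low randomness -> good for factual/classification tasks
    top_p=0.9,        # restrict sampling to the most likely tokens
    max_tokens=10,    # cap the length of the reply
)
print(response.choices[0].message.content)
```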
What is fine-tuning?
Fine-tuning is the process of taking a pre-trained model and training it further on a domain-specific dataset. It lets the user obtain task-specific outputs without having to train a new LLM from scratch.
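A minimal sketch of that idea, assuming a hypothetical labelled CSV (`domain_examples.csv` with a "text" column and an integer "label" column) and using the Hugging Face Trainer as one common way to continue training a pre-trained model; none of these names come from the original notes.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Hypothetical domain-specific labelled data: "text" and integer "label" columns
dataset = load_dataset("csv", data_files={"train": "domain_examples.csv"})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)  # pre-trained weights, new task head

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

tokenized = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned", num_train_epochs=3),
    train_dataset=tokenized["train"],
)
trainer.train()  # continues training the pre-trained model on the domain data
```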
What are some different types of fine-tuning?
Supervised fine-tuning: the model is further trained on a labelled dataset specific to the task.
Few-shot learning: providing the model with a few examples of the required task at the beginning of the input prompt. This gives the model better context when forming its output.
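A small sketch of a few-shot prompt; the sentiment-labelling task and the example reviews are made up for illustration.

```python
# A few-shot prompt: a handful of worked examples precede the actual query,
# giving the model context for the required task (sentiment labelling here).
few_shot_prompt = """Label the sentiment of each review as Positive or Negative.

Review: "The battery lasts all day." -> Positive
Review: "The screen cracked after a week." -> Negative
Review: "Setup was quick and painless." -> Positive

Review: "The speakers are tinny and quiet." ->"""

# This prompt would then be sent to any LLM completion endpoint;
# the examples above steer it towards answering with just "Negative".
print(few_shot_prompt)
```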
What are the steps of supervised learning?
1 - Construct a training set (a large dataset of examples).
2 - Apply the supervised learning method: manually perform the required task on the training set to analyse the relationship between its features (phrases, keywords, patterns) and its categories.
3 - Use the labelled training set to infer the labels of the test set, using, for instance, an ANN, so as to imitate the human labelling as closely as possible.
4 - Validate the output.
5 - Classify the remaining documents.
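A toy sketch of these five steps for document classification, using a hypothetical support-ticket dataset, TF-IDF features, and scikit-learn's MLPClassifier as the ANN; the data and labels are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

# Steps 1-2: a hypothetical, manually labelled training set (toy-sized here)
train_texts = [
    "please refund my last order",
    "I was charged twice this month",
    "I cannot log in to my account",
    "the password reset email never arrived",
]
train_labels = ["billing", "billing", "access", "access"]

# A held-out labelled test set, used later for validation (step 4)
test_texts = ["why was my card charged again", "my login keeps failing"]
test_labels = ["billing", "access"]

# Step 3: extract features (TF-IDF over words/phrases) and fit an ANN
# so it imitates the human labelling as closely as possible
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(train_texts)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X_train, train_labels)

# Step 4: validate the output against the human labels
predictions = clf.predict(vectorizer.transform(test_texts))
print("accuracy:", accuracy_score(test_labels, predictions))

# Step 5: classify the remaining, unlabelled documents
remaining = ["refund still not received", "account locked after too many attempts"]
print(clf.predict(vectorizer.transform(remaining)))
```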
What is unsupervised learning and what are its benefits?
Unsupervised learning: the model is trained to find patterns in written text that correspond to the specific task and to reproduce them.
If the dataset contains sufficient examples of the task, the model will learn to perform it.
It can be further fine-tuned by giving it specific prompt-completion pairs.
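A small sketch of what such prompt-completion pairs might look like, written out as JSONL; the "prompt"/"completion" field names follow the style of some fine-tuning APIs, but the exact format depends on the provider and the examples are invented.

```python
import json

# Hypothetical prompt-completion pairs used for further fine-tuning
pairs = [
    {"prompt": "Summarise: 'Sales rose 12% in Q3 driven by new markets.'\n\n###\n\n",
     "completion": " Q3 sales grew 12% thanks to expansion into new markets."},
    {"prompt": "Summarise: 'The update fixes the login bug reported last week.'\n\n###\n\n",
     "completion": " The update resolves last week's login bug."},
]

# Write one JSON object per line, the usual layout for fine-tuning data files
with open("finetune_data.jsonl", "w") as f:
    for pair in pairs:
        f.write(json.dumps(pair) + "\n")
```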