This is a guide on how to train embeddings with textual inversion on a person's likeness. It assumes you are using the Automatic1111 Web UI to do your training and that you know basic embedding-related terminology. This is not a step-by-step guide, but rather an explanation of what each setting does and how to fix common problems. I've been practicing training embeddings for about a month now using these settings and have successfully made many embeddings, ranging from poor quality to very good quality. This is a collection of the lessons I've learned and the settings I suggest when training an embedding to learn a person's likeness.

An embedding is a special word that you put into your prompt that will significantly change the output image. For example, if you train an embedding on Van Gogh paintings, it should learn that style and turn the output image into a Van Gogh painting. If you train an embedding on a single person, it should make all people in the output look like that person.

To keep it brief, there are 3 alternatives to using an embedding: models, hypernetworks, and LoRAs. A model is a 2GB+ file that can do basically anything; it takes a lot of VRAM to train and has a large file size. A hypernetwork is an 80MB+ file that sits on top of a model and can learn new things not present in the base model. The main advantage of embeddings is their flexibility and small size.
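To make the "special word in your prompt" idea concrete: in the Automatic1111 Web UI you just type the embedding's name into the prompt box, but the same trained .pt file can also be used from a script. The sketch below is not part of this guide's workflow; it uses the Hugging Face diffusers library, and the base model name, file path, and the token name "my-person" are placeholders for whatever you actually trained.

    # Minimal sketch (assumptions: diffusers and torch installed, a CUDA GPU,
    # and an A1111-style textual inversion embedding saved as my-person.pt).
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # Load the trained embedding and register its trigger word.
    pipe.load_textual_inversion("./my-person.pt", token="my-person")

    # Using the trigger word in the prompt pulls in the learned likeness.
    image = pipe("a portrait photo of my-person, detailed face").images[0]
    image.save("my-person-test.png")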