Tips for training Firefly custom models

Learn how enterprise users can train Firefly custom models to reflect their brand.

Training essentials

To train custom models, gather a diverse and representative data set for better performance and accuracy.

Images

  • Use JPG or PNG files.
  • Choose at least 10-30 high-quality images that showcase the brand-specific styles, concepts, or subjects you want to achieve.
  • Capture a varied set of images representing the style or subject.
  • Ensure that each image file size does not exceed 50 MB.
  • Ensure the images have a resolution higher than 1024x1024 pixels and an aspect ratio no wider than 16:9 for landscape or taller than 9:16 for portrait (a quick automated check is sketched after this list).
  • Keep the aspect ratio consistent with the training dataset. If the training set is in portrait and you generate square images, the results may have cutoff issues.
  • Crop your sample images to focus on the most important visual elements.
  • Include images displaying various viewpoints and backgrounds while maintaining a consistent aesthetic.
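
If you are preparing a large image set, a short script can flag files that miss these requirements before you upload them. The sketch below is a minimal, hypothetical example in Python using the Pillow library; the folder name and thresholds simply mirror the guidance above, and none of it is part of Firefly itself.

```python
# Pre-flight check for a folder of candidate training images:
# JPG/PNG only, <= 50 MB per file, >= 1024x1024, aspect ratio within 16:9 / 9:16.
from pathlib import Path
from PIL import Image

MAX_BYTES = 50 * 1024 * 1024   # 50 MB per file
MIN_SIDE = 1024                # minimum width/height in pixels
MAX_RATIO = 16 / 9             # widest landscape (or, inverted, tallest portrait) allowed

def check_image(path: Path) -> list[str]:
    """Return a list of problems found for one candidate training image."""
    if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
        return [f"unsupported format {path.suffix!r}; use JPG or PNG"]
    problems = []
    if path.stat().st_size > MAX_BYTES:
        problems.append("file is larger than 50 MB")
    with Image.open(path) as img:
        width, height = img.size
    if min(width, height) < MIN_SIDE:
        problems.append(f"resolution {width}x{height} is below 1024x1024")
    if max(width, height) / min(width, height) > MAX_RATIO:
        problems.append("aspect ratio is wider than 16:9 (or taller than 9:16)")
    return problems

if __name__ == "__main__":
    folder = Path("training_images")           # hypothetical folder of candidate images
    for image_path in sorted(folder.glob("*")):
        if image_path.is_file():
            for issue in check_image(image_path):
                print(f"{image_path.name}: {issue}")
```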

Captions

  • Use captions to enhance detail and train the custom models on concepts you want the model to generate. 
  • Keep image captions short and concise. We recommend no more than 15-18 words.
  • Vary sentence structure across all your image captions.
  • Modify auto-generated captions as needed to inform the model of the details of the concept.
  • The Firefly base model does not know famous people or places, so captions should include descriptions of such people and places to improve outcomes.
  • If you are training on a specific Subject or Object, ensure captions include key phrases and give the subject clear detail in the caption for each image.
  • Use keywords and descriptive details in about two-thirds of your training captions to better steer the model's output.
    • For example, if a brand logo appears in the images, refer to it as "Adobe logo" in roughly two-thirds of the captions and leave it out of the remaining third; this way the model learns to identify and generate the logo. A simple coverage check is sketched after this list.
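
As a sanity check before training, you can script the caption-length and keyword-coverage guidance above. The snippet below is only a sketch: the sample captions and the "Adobe logo" keyword are placeholders, and the two-thirds target mirrors the tip above rather than any requirement of the product.

```python
# Flag captions that run long and report how many mention the concept keyword.
captions = {
    "img_001.jpg": "Adobe logo on a red tote bag photographed outdoors",
    "img_002.jpg": "A shopper carrying a plain tote bag through a busy market",
    "img_003.jpg": "Close-up of the Adobe logo printed on canvas fabric",
}
CONCEPT = "Adobe logo"   # keyword you want roughly two-thirds of captions to mention
MAX_WORDS = 18           # upper end of the 15-18 word recommendation

too_long = [name for name, text in captions.items() if len(text.split()) > MAX_WORDS]
with_concept = sum(1 for text in captions.values() if CONCEPT.lower() in text.lower())
share = with_concept / len(captions)

print(f"Captions over {MAX_WORDS} words: {too_long or 'none'}")
print(f"Captions mentioning '{CONCEPT}': {with_concept}/{len(captions)} ({share:.0%}); aim for about two-thirds")
```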

Testing your models prior to publishing 

Prompts

  • Use shorter, more precise prompts to better honor the subject and style.

Text to image settings panel 

  • The Visual intensity slider is set to the lowest value by default for optimal identity preservation. However, for creative use cases such as Style reference, increasing the visual intensity can produce more vibrant results.
  • When using Composition references for subjects, opt for images with white backgrounds or sketches depicting the subject in the desired pose. 

Model-specific best practices

Subject models

Custom models trained on a subject, such as an object, character, or product, will identify key features of the subject and attempt to replicate it in different positions and environments.

Images

  Sample images to use

[Image: set of four images showing a blue furry character with a consistent style across images.]
For best results, select a set of images that show the same subject with a consistent style across images.

  Sample images to avoid

[Image: set of four images showing varying characters with different styles across images.]
Low-quality images with varying subjects and styles can result in less effective models and hallucinations.

Look for images with the following characteristics when training a subject model:
  • Object consistency: Provide images of the same make and model as your subject while ensuring that the subject doesn't look wildly different across images. Avoid mixing multiple colors, and ensure a common theme or pattern among images. However, your subject can vary across scenes, poses, clothing, and backgrounds.
  • Object focus: Use images of the subject in clear focus without unnecessary distractions. Keep the subject near the center of the image and make sure that it occupies at least 25% of the image's area (a quick way to spot-check this is sketched after this list).
  • Environmental context: Provide images of the subject in different views and contexts, showing it in a variety of lighting conditions. While images with white or transparent backgrounds can be used, it's best to have a mix with more complex surroundings as well.
  • Avoid other objects: Avoid large items in the background or associated with the character. The model memorizes any large item shown in the images and will reproduce it in generated images, looking much like it does in the training dataset.
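
If you want to spot-check the object-focus guidance quantitatively, a small helper like the one below can report how much of the frame a subject covers and how close it sits to the center. It assumes you have (or roughly estimate) a bounding box for the subject; Firefly does not require or use such boxes, so this is purely an illustrative pre-flight aid.

```python
def subject_focus(image_w: int, image_h: int, box: tuple[int, int, int, int]) -> None:
    """Report subject coverage and center offset for one annotated image."""
    left, top, right, bottom = box                   # pixel coordinates of the subject box
    coverage = (right - left) * (bottom - top) / (image_w * image_h)

    box_cx, box_cy = (left + right) / 2, (top + bottom) / 2
    # Offset of the subject's center from the image center, as a fraction of each dimension.
    dx = abs(box_cx - image_w / 2) / image_w
    dy = abs(box_cy - image_h / 2) / image_h

    print(f"Subject covers {coverage:.0%} of the image (guideline: at least 25%)")
    print(f"Center offset: {dx:.0%} horizontal, {dy:.0%} vertical (smaller is more centered)")

subject_focus(2048, 1536, (500, 350, 1550, 1200))    # example values only
```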

Captions

  • Concisely describe the image and any objects distinct from the subject in the caption. This helps the model distinguish the subject from other objects.
  • When captioning, if a key adjective or descriptor co-occurs with the concept and you want generations to always match that adjective, include it in the concept itself. For example, if you want a custom model to generate gray cats, your concept is "cat" and your adjective is "gray"; to always generate a gray cat, make the concept "gray cat".

Concept

  • Ensure that the Concept entered has at least three characters. 
  • Use a proper noun that represents your model's main subject. For example, if your model was trained on your dog, "Spot," enter "Spot" in the Concept field.

Style models

Custom models trained on a style will identify the look and feel of the assets to generate similar images when prompted.

  Sample images to use

[Image: set of four images with a consistent style across images.]
For best results, use a set of images with the same colors and aesthetic.

  Sample images to avoid

[Image: set of four images with varying styles across images.]
Images with varying colors and aesthetics can result in less effective models.

Images

To train an effective style model:
  • Provide similar aesthetics: Include images that show various scenes and objects while maintaining the same look and feel.
  • Use various images: Use as many images as you can to prevent the model from focusing too much on unwanted objects or subjects.
  • Avoid any fixed phrases: A fixed pattern has a bigger weight than other phrases. For example, if every caption contains "The background is solid black" or “cute cartoon styles” the model will depend on this phrase, and any testing prompt without it will not generate the desired results. 

Captions

  • Keep image captions short and concise. We recommend no more than 15-18 words. 
  • The captions should call out details that allow Firefly to gain context about the style. 
  • Focus on what is unique about each image using terminology that your Generators will use when prompting, such as “human figure” or “pink bulleted circles.”
  • Avoid any fixed phrases, as a fixed pattern carries more weight than other phrases. For example, if every caption contains "The background is solid black" or "cute cartoon styles," the model will depend on that phrase, and any testing prompt without it will not generate the desired results. A quick way to spot such repeated phrases is sketched below.
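
One way to catch fixed phrases before training is to count short word sequences that repeat across captions. The sketch below flags any three-word phrase that appears in more than half of the captions; the sample captions and the one-half threshold are illustrative only.

```python
# Find three-word phrases that recur across most captions, which the model could latch onto.
from collections import Counter

captions = [
    "A cute cartoon fox waving. The background is solid black",
    "A cute cartoon owl reading a book. The background is solid black",
    "A cute cartoon bear holding a balloon. The background is solid black",
]

def trigrams(text: str) -> set[tuple[str, ...]]:
    """Return the distinct three-word phrases in a caption (case-insensitive)."""
    words = text.lower().replace(".", " ").split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

counts = Counter(phrase for caption in captions for phrase in trigrams(caption))
threshold = len(captions) / 2   # flag phrases shared by more than half of the captions

for phrase, count in counts.most_common():
    if count > threshold:
        print(f'"{" ".join(phrase)}" appears in {count} of {len(captions)} captions')
```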
