p-image-edit-trainer

Train custom LoRAs for image editing transformations

Overview

p-image-edit-trainer allows you to train custom LoRA (Low-Rank Adaptation) weights for use with the p-image-edit-lora model. Train personalized image transformations, style transfers, and custom editing behaviors using pairs of before/after images.

This is NOT an inference model. It does not edit images. Instead, it outputs a ZIP file containing trained LoRA weights (.safetensors).

Rate Limit: 5 requests per minute

Category: LoRA Training

Price: $4.00 / 1000 steps

Important Notes:

  • Async only - Training takes minutes to hours. Do not use the Try-Sync header.
  • Download within 30 minutes - The output URL expires approximately 30 minutes after training completes. Download immediately and upload to HuggingFace for permanent storage.
  • Trained LoRAs only work with p-image-edit-lora, not with other models.

Workflow

  1. Prepare image pairs - Create a ZIP file with before/after image pairs using _start/_end naming
  2. Upload ZIP to accessible URL - Host your training data somewhere accessible
  3. Start training - Submit async request (takes minutes to hours)
  4. Poll for completion - Check status until training succeeds
  5. Download output - Get the ZIP file within 30 minutes
  6. Upload to HuggingFace - Store the .safetensors file for permanent access
  7. Use with p-image-edit-lora - Edit images using your trained LoRA

Quickstart

Prepare Training Data (Image Pairs)

Create a ZIP archive with before/after image pairs. Images must follow the _start/_end naming convention:

training_data.zip
├── photo_start.jpg       # Before image
├── photo_end.jpg         # After image (transformed)
├── photo.txt             # Optional: caption describing the transformation
├── landscape_start.png   # Another before image
├── landscape_end.png     # Corresponding after image
├── landscape.txt         # Optional: caption
└── ...

Naming Convention:

  • Before image: <ROOT>_start.<EXT>
  • After image: <ROOT>_end.<EXT>
  • Caption (optional): <ROOT>.txt

Multiple Reference Images (Optional): For complex transformations, you can include multiple "before" references:

example_start.jpg      # Primary before image
example_start2.jpg     # Additional reference
example_start3.jpg     # Additional reference
example_end.jpg        # After image
example.txt            # Caption
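The naming convention can be checked locally before uploading. The sketch below is illustrative: it assumes a local pairs/ directory with placeholder files (created here for demonstration) and verifies that every `_start` image has a matching `_end` image.

```shell
set -eu

# Hypothetical local directory with placeholder files; replace with
# your own dataset of real before/after images.
mkdir -p pairs
touch pairs/photo_start.jpg pairs/photo_end.jpg
echo "turn the scene from day to night" > pairs/photo.txt

# Validate: every ROOT_start.EXT must have a matching ROOT_end.EXT.
missing=0
for f in pairs/*_start.*; do
  root="${f%_start.*}"
  ext="${f##*.}"
  if [ ! -f "${root}_end.${ext}" ]; then
    echo "missing end image for ${root}" >&2
    missing=1
  fi
done
[ "$missing" -eq 0 ] && echo "all pairs complete"

# Then archive the folder and host it at an accessible URL, e.g.:
# zip -r training_data.zip pairs
```

Note that `pairs/*_start.*` does not match additional references like `photo_start2.jpg`, so multi-reference sets pass the check as long as the primary `_start`/`_end` pair exists.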

Start Training (Async Only)

curl -X POST 'https://api.pruna.ai/v1/predictions' \
  -H 'Content-Type: application/json' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Model: p-image-edit-trainer' \
  -d '{
    "input": {
      "image_data": "https://your-storage.com/edit_pairs.zip",
      "steps": 1000,
      "default_caption": "apply the trained transformation"
    }
  }'

Response:

{
  "id": "training456xyz",
  "model": "p-image-edit-trainer",
  "input": { ... },
  "get_url": "https://api.pruna.ai/v1/predictions/status/training456xyz"
}

Poll for Completion

Training takes minutes to hours depending on steps. Poll periodically:

curl -X GET 'https://api.pruna.ai/v1/predictions/status/training456xyz' \
  -H 'apikey: YOUR_API_KEY'

When complete:

{
  "status": "succeeded",
  "output": "https://api.pruna.ai/v1/predictions/delivery/xezq/abc123.../lora_weights.zip"
}
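A polling loop might look like the sketch below. The `get_status` function stands in for the real curl call (shown commented out) so the loop itself runs without network access; the `PRUNA_API_KEY` variable, the `failed` status name, and the 30-second interval are assumptions, not confirmed API behavior.

```shell
PREDICTION_ID="training456xyz"   # from the submit response

get_status() {
  # Real call (requires jq to extract the status field):
  # curl -s "https://api.pruna.ai/v1/predictions/status/${PREDICTION_ID}" \
  #   -H "apikey: ${PRUNA_API_KEY}" | jq -r .status
  echo "succeeded"   # stubbed here so the sketch runs offline
}

while :; do
  status="$(get_status)"
  case "$status" in
    succeeded)
      echo "training complete"
      break
      ;;
    failed)
      echo "training ended with status: $status" >&2
      break
      ;;
    *)
      sleep 30   # still running; poll again shortly
      ;;
  esac
done
```

Because the output URL expires roughly 30 minutes after completion, it is worth triggering the download immediately after the loop exits with a `succeeded` status.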

Download Output Immediately

The output URL expires in ~30 minutes. Download the ZIP file immediately:

curl -o lora_output.zip "https://api.pruna.ai/v1/predictions/delivery/xezq/abc123.../lora_weights.zip"

Upload to HuggingFace

Extract and upload the .safetensors file to HuggingFace:

unzip lora_output.zip
# Upload lora.safetensors to huggingface.co/your-username/my-edit-lora

Use with p-image-edit-lora

curl -X POST 'https://api.pruna.ai/v1/predictions' \
  -H 'Content-Type: application/json' \
  -H 'apikey: YOUR_API_KEY' \
  -H 'Model: p-image-edit-lora' \
  -d '{
    "input": {
      "prompt": "Apply the trained transformation to image 1",
      "images": ["https://example.com/input.jpg"],
      "lora_weights": "huggingface.co/your-username/my-edit-lora"
    }
  }'

Parameters

Required Parameters

  • image_data (string, URI) - URL to a ZIP archive with image pairs. Images must be named ROOT_start.EXT and ROOT_end.EXT. Can include multiple references (ROOT_start2.EXT, etc.) and optional caption text files (ROOT.txt).

Optional Parameters

  • steps (integer, default 1000) - Number of training steps. Range: 100-5000, in increments of 100. More steps mean longer training and potentially better results.
  • learning_rate (number, default 0.0001) - Learning rate for training. Range: 0.00001-0.01. Lower values train more slowly but more stably.
  • default_caption (string, no default) - Default caption for image pairs without .txt files. If not provided and captions are missing, training fails.
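Since pricing is $4.00 per 1000 steps, the cost of a run can be estimated up front. A small sketch (the step count here is an arbitrary example; integer-cent arithmetic avoids floating point in shell):

```shell
# $4.00 per 1000 steps = 400 cents per 1000 steps.
steps=2000
cost_cents=$(( steps * 400 / 1000 ))
printf 'estimated cost for %d steps: $%d.%02d\n' "$steps" \
  $(( cost_cents / 100 )) $(( cost_cents % 100 ))
# prints: estimated cost for 2000 steps: $8.00
```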

Steps Guidelines

  Steps       Use Case                           Expected Time
  100-500     Quick tests, simple transforms     Minutes
  500-1000    Standard training                  10-30 minutes
  1000-2000   High quality, complex transforms   30-60 minutes
  2000-5000   Maximum quality                    1-2+ hours

Example Use Cases

  Transformation Type   Training Data Example
  Style transfer        Photos paired with artistic renditions
  Day-to-night          Daytime scenes paired with nighttime versions
  Season changes        Summer scenes paired with winter versions
  Enhancement filters   Original images paired with enhanced versions
  Custom effects        Before/after pairs showing your custom transformation