After I made my full photo archive available for free, some Reddit users, whom I thank, such as NobodyButMeow, created a Qwen Image LoRA from my photos. What struck me was that, prompting with the initial caption text, the results resemble the originals closely, as you can see below.
I have to mention that I am also using a WAN 2.2 refiner, as in the workflow here.
The LoRA is available here; no trigger words are needed.
Here is a sample prompt for the second image:
“A landscape at sunset, featuring a prominent, conical mountain in the foreground. The mountain is covered with snow, and its peak is illuminated by the setting sun, casting a warm, golden glow across the scene. The sky is filled with dramatic clouds, adding depth and texture to the composition. In the foreground, there is a small waterfall cascading over a rocky surface, partially covered in ice and snow. The water appears to be flowing gently, creating a sense of tranquility. The background reveals a vast, open landscape with more mountains and a body of water reflecting the sunset colors.”
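For anyone who wants to try it, here is a minimal sketch of running the LoRA with that prompt via the diffusers library. This is my own assumption, not the trainer's exact setup: the LoRA file name is a placeholder, and the WAN 2.2 refiner pass from the linked workflow is omitted.

```python
# Minimal sketch: Qwen-Image plus a LoRA in diffusers.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

# No trigger words are needed, so the caption is used verbatim.
pipe.load_lora_weights("qwen_image_lora.safetensors")  # placeholder file name

prompt = (
    "A landscape at sunset, featuring a prominent, conical mountain in "
    "the foreground. ..."  # full sample caption quoted above
)

image = pipe(prompt, num_inference_steps=50).images[0]
image.save("sunset_mountain.png")
```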
My first reaction was "g'dam, that's freaky close", but if the LoRA was trained using only your pictures and captions, that means it's closer to retrieval than generation: because you're prompting with the caption, it's not that strange that it pulled so close to the original. Or am I wrong?
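The retrieval-versus-generation question raised in that comment can be made concrete by measuring how close a generation sits to the training photo, for example via cosine similarity of CLIP image embeddings. A minimal sketch, assuming the transformers library and placeholder file names; a similarity near 1.0 would point toward memorization.

```python
# Hedged sketch: compare a generated image to the original training photo
# using CLIP image embeddings. File names are placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(path: str) -> torch.Tensor:
    # Encode one image and L2-normalize so a dot product is cosine similarity.
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

sim = (embed("original.jpg") * embed("generated.png")).sum().item()
print(f"CLIP cosine similarity: {sim:.3f}")
```

A fairer test of the same point would be to prompt the LoRA with a caption that was not in the training set and check whether the output still drifts toward one specific training photo.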