AI Artwork Generation & Visual Direction
To build the visual world around the interface, I generated original character artwork with ComfyUI, a node-based generation interface for Stable Diffusion. This workflow allowed me to go beyond plain text prompts, giving me granular control over parameters such as lighting, color balance, and stylistic consistency.
This approach let me iterate rapidly, fine-tuning each output to match the project’s fantasy tone. Because the node graph encodes every generation setting, I could reuse it across multiple designs to keep results consistent, which was essential for building a unified visual identity.
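To illustrate how a node graph keeps results consistent, here is a minimal, hypothetical sketch of a ComfyUI workflow in its API-format JSON, built in Python. The node IDs, the checkpoint filename, and the specific sampler settings are illustrative assumptions, not the actual project workflow; the `class_type` names ("CheckpointLoaderSimple", "CLIPTextEncode", "KSampler", etc.) follow ComfyUI's standard node set. The key idea is that seed, sampler, and style settings live in the graph, so only the character prompt changes between designs.

```python
# Hypothetical sketch of a ComfyUI API-format workflow graph.
# Node IDs ("1".."7") and the checkpoint filename are illustrative
# assumptions; class_type names are standard ComfyUI nodes.

def build_workflow(character_prompt: str, seed: int = 123456) -> dict:
    """Return a workflow dict with a fixed seed and sampler settings."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple",
              "inputs": {"ckpt_name": "fantasy_model.safetensors"}},  # assumed name
        "2": {"class_type": "CLIPTextEncode",  # positive prompt
              "inputs": {"text": character_prompt, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode",  # negative prompt
              "inputs": {"text": "blurry, low quality", "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 768, "height": 1024, "batch_size": 1}},
        "5": {"class_type": "KSampler",  # fixed seed keeps outputs reproducible
              "inputs": {"seed": seed, "steps": 28, "cfg": 7.0,
                         "sampler_name": "euler", "scheduler": "normal",
                         "denoise": 1.0, "model": ["1", 0],
                         "positive": ["2", 0], "negative": ["3", 0],
                         "latent_image": ["4", 0]}},
        "6": {"class_type": "VAEDecode",
              "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage",
              "inputs": {"images": ["6", 0], "filename_prefix": "character"}},
    }

# Two character designs share every setting except the prompt, which is
# what keeps a series stylistically consistent.
knight = build_workflow("armored knight, painterly fantasy style")
mage = build_workflow("forest mage, painterly fantasy style")
```

In practice, a dict like this could be submitted to a locally running ComfyUI server's `/prompt` endpoint, though in the project itself the graph was assembled interactively in the ComfyUI editor.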
The AI-generated imagery was further refined and composited in Photoshop, ensuring balance between illustration detail and interface clarity. These visual elements were then integrated into the Figma prototype, creating a seamless blend between artwork and interface design.
Tools Used
Figma: Component-based prototyping, design system creation, and responsive layout
Stable Diffusion (ComfyUI): Node-based AI image generation for consistent fantasy artwork
Photoshop: Image refinement, compositing, and final visual polishing