Virtual try-on for furniture is a well-established concept, and major retailers already deploy AR to anchor photorealistic couches, tables, and lamps inside a buyer’s home before purchase. The IKEA app, among others, popularized this pattern: clean catalog inputs, standardized product photography, stable geometry. This solves one problem, reducing uncertainty when buying new furniture, but leaves a larger opportunity untouched. The secondhand market, characterized by high variability in item condition and presentation, lacks equivalent tooling.

Used furniture can face slower adoption because buyers often struggle to assess fit, scale, or style from fragmented photos. This uncertainty may dampen secondhand transactions and, arguably, limit progress toward circular-economy objectives. Circular marketplaces such as Norway’s FINN.no operate under exactly these challenging conditions: inconsistent lighting, cluttered backgrounds, nonstandard geometry, and one-off items. The underlying idea is that allowing buyers to preview used items within their own spaces can ease this ambiguity, lessen reliance on new retail, and gradually support a more sustainable pattern of consumption. The sustainability gap, in this case, is arguably more technical than cultural: if reliable visualization tools for used objects become widely available, they could yield environmental benefits in the form of longer product lifecycles and reduced material throughput.

Our Work: AI-powered Virtual Try-On in AR

To explore this case, we launched a small-scale exploratory project, “RAG LLM – Improve Refurbishing Processes for More Climate-Positive Practices”, supported by Basic Funding from the Research Council of Norway. The work focused on building an AI-based workflow that generates an AR virtual try-on from a single furniture photo. An LLM-driven user interface (UI) was implemented to trigger the pipeline. The system uses OpenAI’s GPT-4o-mini API as the dialogue engine, RemBG for background removal on furniture images, Meshy AI for converting those images into 3D models, and Web-AR Studio for rendering the models in augmented reality. The full workflow appears in the diagram below and in the accompanying video.
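To make the middle steps of the workflow concrete, the snippet below strings together background removal with RemBG and an image-to-3D request to Meshy. It is a minimal sketch rather than the project’s code: the Meshy endpoint, payload fields, and response keys are assumptions based on its public image-to-3D API and should be checked against the current documentation, and the furniture cutout is assumed to be hosted at a publicly reachable URL before it is sent to Meshy.

```python
# Minimal sketch of the photo -> 3D-model steps (assumptions noted inline).
import time
import requests
from rembg import remove  # pip install rembg

MESHY_API_KEY = "YOUR_MESHY_API_KEY"  # placeholder


def cut_out_furniture(photo_path: str, output_path: str) -> str:
    """Remove the background from a furniture photo with RemBG."""
    with open(photo_path, "rb") as f:
        cutout = remove(f.read())
    with open(output_path, "wb") as f:
        f.write(cutout)
    return output_path


def image_to_3d(image_url: str) -> str:
    """Request a 3D model from Meshy and poll until it is ready.

    The endpoint, payload, and response fields below are assumptions based on
    Meshy's public image-to-3D API; verify against the current documentation.
    """
    headers = {"Authorization": f"Bearer {MESHY_API_KEY}"}
    task = requests.post(
        "https://api.meshy.ai/openapi/v1/image-to-3d",  # assumed endpoint
        headers=headers,
        json={"image_url": image_url},
    ).json()
    task_id = task["result"]  # assumed response field

    while True:
        status = requests.get(
            f"https://api.meshy.ai/openapi/v1/image-to-3d/{task_id}",
            headers=headers,
        ).json()
        if status["status"] == "SUCCEEDED":
            return status["model_urls"]["glb"]  # GLB URL for the AR viewer
        if status["status"] == "FAILED":
            raise RuntimeError("Meshy image-to-3D task failed")
        time.sleep(10)


if __name__ == "__main__":
    cut_out_furniture("listing_photo.jpg", "cutout.png")
    # After uploading cutout.png somewhere publicly reachable:
    # glb_url = image_to_3d("https://example.com/cutout.png")
    # The GLB URL is then handed to the Web-AR Studio scene for AR rendering.
```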

In this exploratory scenario, the pipeline is triggered by a conversational assistant. In most practical cases, a simple UI button would be technically cleaner. The button would either launch the pipeline described here or, if a 3D model of the item had already been generated when the seller created the listing in the circular marketplace, simply load that model and activate the AR component, as sketched below. In that case, the interface could resemble the mockup shown below (made with Figma).
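The decision behind such a button is a simple lookup-or-generate branch. The sketch below illustrates it with hypothetical names (`PREGENERATED_MODELS`, `run_photo_to_3d_pipeline`, `model_url_for_ar`); in a real marketplace the lookup would hit the listing record rather than an in-memory dict.

```python
# Sketch of the button-triggered flow: reuse a stored model if the seller
# already generated one at listing time, otherwise run the full pipeline.
from typing import Optional

# Hypothetical store of pre-generated 3D models, keyed by listing ID.
PREGENERATED_MODELS = {
    "listing-123": "https://cdn.example.com/models/listing-123.glb",
}


def run_photo_to_3d_pipeline(listing_id: str) -> str:
    """Placeholder for the RemBG -> Meshy pipeline sketched earlier."""
    return f"https://cdn.example.com/models/{listing_id}.glb"


def model_url_for_ar(listing_id: str) -> str:
    """Return a GLB URL for the AR viewer, reusing a stored model if one exists."""
    stored: Optional[str] = PREGENERATED_MODELS.get(listing_id)
    return stored if stored is not None else run_photo_to_3d_pipeline(listing_id)
```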

An Alternative: AI-powered Virtual Try-On via Synthetic Media

To explore an alternative approach to AI-powered virtual try-on, we investigated the use of AI-generated images and videos. The workflow employs OpenAI’s GPT-4o-mini API as the dialogue engine, RemBG for background removal on furniture images, OpenAI’s GPT-Image-1 API for image synthesis, and LumaAI’s Dream Machine for AI-based video generation. The system requests a photograph of the user’s room along with specific textual instructions indicating where the furniture should be placed within the scene. The final output is an AI-generated image depicting the furniture positioned in the room, with an optional video if requested by the user (in this case, a 5-second camera fly-through). An additional concept, not implemented in this work, involves enabling furniture placement through a drawing-based user interface in which the user directly points to or sketches the desired placement on the uploaded room photo, supplemented by a prompt for further specification. A suitable source of inspiration for such an interface is the SpacelyAI virtual staging platform.
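For the image-synthesis step, a minimal sketch using the OpenAI Python SDK could look like the following. Passing both the room photo and the furniture cutout to the `images.edit` endpoint, and the exact prompt wording, are assumptions on our part; verify current multi-image input support for GPT-Image-1 against the OpenAI documentation. The Dream Machine fly-through is only referenced in a comment rather than implemented here.

```python
# Minimal sketch of the image-synthesis step (room photo + furniture cutout).
import base64
from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def place_furniture(room_path: str, cutout_path: str, instruction: str) -> str:
    """Generate an image of the furniture placed in the user's room."""
    result = client.images.edit(
        model="gpt-image-1",
        image=[open(room_path, "rb"), open(cutout_path, "rb")],
        prompt=(
            "Place the furniture from the second image into the room from the "
            f"first image. {instruction} Keep lighting and perspective consistent."
        ),
    )
    out_path = "room_with_furniture.png"
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(result.data[0].b64_json))
    return out_path


if __name__ == "__main__":
    place_furniture(
        "room.jpg",
        "cutout.png",
        "Put the armchair in the empty corner next to the window.",
    )
    # The generated image can then be passed to LumaAI's Dream Machine to
    # produce the optional 5-second camera fly-through.
```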

Conclusion

In conclusion, virtual try-on using AI and AR represents a highly promising development in how users interact with products in digital environments. The underlying technologies are advancing at a rapid pace, which makes it impractical to remain static or to recommend specific tools as long-term solutions, since today’s capabilities are likely to be surpassed quickly by more powerful models. Recent industry developments indicate that progress is moving in the right direction, as demonstrated, a few days after we concluded the work described above, by Meta’s introduction of SAM 3D, which directly targets this type of spatial understanding and object interaction use case. Taken together, these trends suggest that AI- and AR-driven virtual try-on can support more sustainable, circular-economy practices by reducing waste and unnecessary production, and is therefore likely to see broader adoption in the near future.

Work conducted by Maria Emine Nylund, Jiaxin Li, Ophelia Prillard, and Costas Boletsis. Blogpost written by Costas Boletsis.