December 12, 2023 • 3 min read
by Simon Meng, originally published on mp.weixin.qq.com
I, a programming novice, have built a portal to parallel space-times: GaussianSpace, a tool that lets you edit large 3D Gaussian Splatting scenes with text guidance. 🐶
Recently, 3D Gaussian Splatting has reached an incredible level of detail in reconstructing real scenes in 3D. As a former architect, I could easily imagine that adding text guidance for scene-wide edits would let us create parallel worlds 😮! I didn't want to reinvent the wheel, but after some digging I found that existing methods for text-guided editing of 3D Gaussians are mostly based on InstructPix2Pix, which only supports local edits 😂.
So I had no choice but to tackle it myself 🧐. On top of the original 3D Gaussian loss, I added a score distillation sampling (SDS) loss driven by a 2D Stable Diffusion model, and introduced an automatic weighted loss to balance the SDS loss against the real-image loss. This keeps the overall loss decreasing steadily over the iterations, so the edited Gaussian scene preserves the original structure while responding to the text guidance, and finally completes its migration to the parallel space-time! 🥹
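To make the balancing idea concrete, here is a minimal sketch in PyTorch of an uncertainty-style automatic weighted loss that combines an SDS term with a real-image reconstruction term. This is not the actual GaussianSpace implementation; the class name, the weighting formula, and the placeholder loss values are assumptions for illustration only.

```python
import torch
import torch.nn as nn


class AutomaticWeightedLoss(nn.Module):
    """Learns one log-variance per loss term and combines the terms so
    that neither the SDS loss nor the reconstruction loss dominates."""

    def __init__(self, num_losses: int = 2):
        super().__init__()
        # One learnable weighting parameter per loss term.
        self.log_vars = nn.Parameter(torch.zeros(num_losses))

    def forward(self, *losses: torch.Tensor) -> torch.Tensor:
        total = torch.zeros((), device=self.log_vars.device)
        for i, loss in enumerate(losses):
            precision = torch.exp(-self.log_vars[i])
            # Scale each loss by its learned precision, plus a regularizer
            # that stops the weights from collapsing to zero.
            total = total + precision * loss + self.log_vars[i]
        return total


if __name__ == "__main__":
    awl = AutomaticWeightedLoss(num_losses=2)
    # Placeholder scalars standing in for the SDS loss (from a 2D diffusion
    # prior on renders of the edited Gaussians) and the photometric loss
    # against the captured real images.
    sds_loss = torch.tensor(1.3, requires_grad=True)
    recon_loss = torch.tensor(0.2, requires_grad=True)
    total = awl(sds_loss, recon_loss)
    total.backward()
    print(float(total))
```

In a full pipeline, the SDS term would presumably be computed from the Stable Diffusion prior on rendered views of the edited scene, and the reconstruction term from comparing renders against the original captured photos, with both fed through a weighting module like the one sketched above.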
I created three parallel spaces I'm happy with, migrating the Graz Armory Museum into a Cyber Machine Armory, an Abandoned Biological Exhibition, and a Fantasy Toy House! Note that this is not a video; it is a fully interactive 3D scene (rotate, zoom, pan)! 🫠
➡️ You can interact with the migrated 3D Gaussian parallel spaces at the following URL (please open it in Chrome; web rendering quality is slightly lower than local rendering): *https://showcase.3dmicrofeel.com/armour_museum-house.html*
➡️ More information is on our project page (may require a VPN): *https://gaussianspace.github.io/*
🤔 PS: This is just the first run of the technical pipeline; many improvements haven't been added yet, so I should be able to push the quality further (I originally wanted to polish it more before posting, but things are moving so fast right now that I decided to share it first and stake my claim 😂)! I hope to make it available to everyone in some form when the time is right. 🤗
 
Related Articles
DreamGaussian: The Stable Diffusion Moment of AIGC 3D Generation
How I Used AI to Create a Promotional Video for Xiaomi's Daniel Arsham Limited Edition Smartphone
3D scene editing has entered the era of AI text interaction
Works Series - Dimensional Recasting
The 2022 Venice - Metaverse Art Annual Exhibition: How Nature Inspires Design
Hidden Time Space 5: Cloudscape Artistry — "Vintage" AI Model Generates Retro Chinese-Style Landscape Animation
AI-generated solutions for simplifying and solving 3D problems, along with the new challenges that arise
I have developed an open-source AI tool for generating 3D models from text, Dreamfields-3D
Simon Shengyu Meng
AI artist driven by curiosity, cross-disciplinary researcher, PhD candidate, science communication blogger.
Latest Posts
Works Series - Boundless Intelligence
Oct 13, 2024
Works Series - RE-Imaginate nature
Oct 8, 2024
Supernova Explosion | Simon Meng | AI Genesis
Oct 5, 2024
At the end of 2023, I want to share two comforting AI tools with you and have a heartfelt chat.
Oct 5, 2024
"Slacking Off" but Accidentally Discovering a New Vision for AI — Interview with GAAC Contestants Wang Zheng and Meng Shengyu
Oct 5, 2024
Using AI to transcend every doomsday of humanity until the end of the universe!
Oct 5, 2024
Announcements
--- About Me ---
--- Contact Me ---
Design and Art Creation | AIGC Consultation and Training | Commercial Deployment